Sunday, January 31, 2021

Some Important Codes in R (Applied Econometrics)

 ■ Some codes in R

Note: Dep var = Dependent Variable.

           Ind var = Independent Variable.

           Var 1 = Variable 1

           Var 2 = Variable 2

a) Taking log values

lvar1 <- log(var1, base = 10)


b) Correlation Test

# cor.test() is part of base R (the stats package), so no extra package is needed

cor.test(var1, var2)


c) Unit Root Test (without deseasonalisation)

# install.packages("fUnitRoots")

library(fUnitRoots)


d) Testing Unit Root at Level (ADF)

adfTest (lvar1, lags = 0, type = "c")

e) Taking 1st difference

d.var1 <-diff (var1)

f) Lag selection criteria

#install.packages("vars")

library(vars)

Say, 

kk <- cbind(lvar1, lvar2, ..., lvarn)

VARselect(kk)$selection

g) Cointegration (Johansen)

Say,

# ca.jo() is in the urca package
library(urca)

JC1 <- ca.jo(kk, type = "trace", ecdet = "none", K = 3)

h) Maximum Eigenvalue Statistic

JC2 <- ca.jo(kk, type = "eigen", ecdet = "none", K = 3)

i) VECM (in p - 1 lags)

Say, 

# VECM() is in the tsDyn package
model1 <- VECM(data.frame(kk), lag = n, r = m, estim = "ML")

where r = number of cointegrating relationships (will discuss later)

j) Impulse Response Function

plot (irf (model1, n.ahead=10))

for 10 periods.


To be contd...

Aditya Pokhrel 

MBA, MA Economics, MPA 

Saturday, January 30, 2021

Quantile Regression Intro - with reference to Eviews and R (Applied Econometrics)

 ■ Quantile Regression

Basically, quantile regression is an extension of linear regression. It is used when the assumptions of linear regression, such as linearity, homoscedasticity, and normality, are not met.

The Quantile regression has no strong distributional assumptions.

Let's state Hypotheses.

Null (Ho): There is no significant impact of the independent variables on the dependent variable.

Say the natural log of one of the independent variables is statistically significant, since its p-value is less than 0.05. Then a 1% increase in that independent variable increases the median value of the dependent variable by 1.13%.

In addition, the remaining independent variable is not significant, since its p-value is greater than 0.05.

■ Quantile regression's Goodness of Fit

Assume the pseudo R^2 is 31% and the adjusted R^2 is, say, 28%. Then 28% of the variation in the conditional median of the dependent variable is explained by the independent variable.

Say the Quasi-LR statistic is 29.3 and its p-value is less than 0.05; this indicates that the model is stable.

■ The process of Quantile Regression (EViews; the R counterpart will be discussed in the next part of this blog)

First run the linear regression and test for serial correlation, normality, and homoscedasticity. If these classical assumptions are violated, we go for quantile regression.

Estimate Equation ➡ Method ➡ Quantile Regression (QREG) ➡ Click OK

Choose the quantile to estimate ➡ select a value between 0 and 1, then click OK.

Note: linear regression models the conditional mean, whereas quantile regression (at the 0.5 quantile) models the conditional median.
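For readers working in R, a minimal sketch using the quantreg package is given below (the data are simulated and the variable names are hypothetical):

library(quantreg)

set.seed(2)
n <- 200
x <- runif(n, 1, 10)
y <- 2 + 1.13 * log(x) + rnorm(n, sd = 0.5)   # true median elasticity of 1.13

fit <- rq(y ~ log(x), tau = 0.5)    # tau = 0.5 estimates the conditional median
summary(fit)                        # slope coefficients with confidence intervals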

We then run the estimation, examine the results, and interpret them.

For the results, Go to View ➡ Quantile Regression ➡ Process Coefficients ➡ Table ➡ Set Quantile 10 ➡ Click OK.

The results will be interpreted in the next blog.

To be contd...

Thank You

Aditya Pokhrel

MBA, MA Economics, MPA 

Friday, January 29, 2021

Meta Analysis - Basics - Part I (with reference to STATA codes)-(Applied Econometrics)

 ■ Meta Analysis basics (Source: Self made notes: Lectures from ANU, Australia) 

Generally, meta-analysis is a statistical procedure for combining data from multiple studies and analysing them together.

In practice, conducting a meta-analysis requires considerable cost and expertise, and it has to deal with complex issues such as increasing precision and providing external validity.

● So why Meta Analysis then ? 

♢ Sometimes research might be biased.

♢ The studies are often underpowered.

♢ The studies might be partially informative about each other.

● How to do it ? 

- Select topic

- Searching

- Screening

- Data Extraction

- Analysis

Let's see its sample process diagram

♢ 1st we see the search resources

♢ Then we work on search string development.

♢ Then either one of two are chosen: Scripted Scraping Searches and Manual Searches.

♢ Then Duplicate Screening

♢ Then, for screening criteria development, title screening is done with double entry, followed by reconciliation.

♢ The reference checking is done

♢ Again duplicate screening

♢ Search results

♢ Again Double Entry and Reconciliation

♢ Finding full text of papers

♢ Then the screened papers.

After these, the process goes out to the data analysis.

● Analysis

Fixed effect: there is one true effect.

Random Effect: True effects may vary by the study.

Mixed Model: True effects may vary by the study, and we can explain some of that.

In STATA the 'metan' package is used, whereas in R the 'meta' package must be installed (R and STATA codes will be discussed in upcoming blogs).
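As an illustration of the R side, here is a minimal sketch with the 'meta' package (the effect sizes and column names below are made up):

library(meta)

dat <- data.frame(study = paste("Study", 1:4),
                  TE    = c(0.20, 0.35, 0.10, 0.28),   # effect estimates
                  seTE  = c(0.10, 0.12, 0.08, 0.15))   # their standard errors

m <- metagen(TE = TE, seTE = seTE, studlab = study, data = dat, sm = "MD")
summary(m)    # pooled estimates under the fixed (common) and random effects models
forest(m)     # forest plot of the individual and pooled effects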

We'll look at model heterogeneity and at estimating biases (funnel plots, caliper tests) in the upcoming parts.

To be contd...

Aditya Pokhrel

MBA, MA Economics, MPA 




Thursday, January 28, 2021

Structural Vector Autoregression (SVAR) modelling basics I - Applied Econometrics

 ■ SVAR (Self-made notes, with reference to IMF reports)

A question may arise in our minds: why do we need the SVAR?

Say we want to know the effects of monetary policy on the economy.

So for this let us consider the set of events: 

a) Central bank anticipates an increase in inflation.

b) The central bank increases the monetary policy interest rate, but inflation still rises as anticipated.

Now here, one could wrongly conclude that the interest rate hike led to the rise in the inflation.

However, it was an endogenous reaction to expected inflation (the monetary policy contraction was endogenous).

Monetary policy reacted to the expected inflation; these events therefore reflect the impact of inflation expectations on monetary policy, not the impact of the policy itself.

But this is not what we actually want to measure. From these events we cannot tell what impact the policy has on the other variables. Simply put, this is not the correct way to measure the effects of monetary policy.

A similar problem applies to fiscal policy as well.

Suppose we want to know the effects of fiscal spending on the economy; consider a new sequence of events.

The fiscal authorities anticipate a reduction in private demand and then increase public spending, causing an increase in the deficit while total output continues declining for some time.

The wrong conclusion would be that the spending multiplier is negative, or in other words that the spending hike reduced output. In reality, the fiscal reaction was endogenous: fiscal policy reacted to the expected output developments.

So, again, this is not the way to measure the effects of public spending on the economy. We cannot measure the impact of monetary or fiscal policy when the policy variable is itself reacting to the movements of the other variables.

In order to measure the effects of policy, what we really want is to identify, or isolate, purely exogenous, purely independent movements (shocks) in the variable of interest and see how the economy reacts to them; these reactions are known as impulse responses.

Here we want to identify purely exogenous shocks to the monetary policy rate and to fiscal spending, respectively.

To do this we have to identify the SVAR.
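As a preview, here is a minimal sketch of estimating an SVAR in R with the vars package; the data are simulated and the recursive (short-run) restrictions are only an assumed example:

library(vars)

set.seed(1)
y1 <- cumsum(rnorm(200))
y2 <- 0.5 * y1 + rnorm(200)
dat <- cbind(dy1 = diff(y1), dy2 = diff(y2))     # work with stationary differences

var_est <- VAR(dat, p = 2, type = "const")       # reduced-form VAR

amat <- diag(2)                                  # A-matrix restrictions
amat[2, 1] <- NA                                 # dy2 may react to dy1 contemporaneously

svar_est <- SVAR(var_est, estmethod = "direct", Amat = amat)
plot(irf(svar_est, n.ahead = 10))                # structural impulse responses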

To be contd...

Thank You

Aditya Pokhrel

MBA, MA Economics, MPA


Wednesday, January 27, 2021

ANCOVA model -Part I (Dummy Variables) - (Applied Econometrics)

 ■ Dummy Var (ANCOVA model)

Generally, in ANOVA models only qualitative (dummy) variables are used as regressors.

Although such models are extensively used in sociology, psychology, and education, they are not very common in economics.

What is common in economics is a mixture of qualitative and quantitative variables, i.e. explanatory variables that include both. Such models are known as ANCOVA models.

□ Analysis of Covariance (ANCOVA) models

In the regression context, the quantitative explanatory variables are known as covariates, hence the name ANCOVA models.

In ANCOVA models our aim is to assess whether there is a difference between groups defined by, say, race, caste, or community, keeping constant the effect of the quantitative regressors.

ANCOVA is an extension of the ANOVA model. An example is cited below.

Say, 

Yi = alpha1 + alpha2 D2i + beta Xi + ui

        where, say, Yi = annual salary
               D2 = 1 if male, 0 if female
               X  = teaching experience (years)

Now say we want to study whether there is a difference between the salaries of male and female teachers, keeping teaching experience constant.

Now, assuming that the CLRM assumptions are satisfied and E(ui) = 0, the mean annual salary of a female teacher is:

E(Yi | Xi, D2 = 0) = alpha1 + beta Xi

and that of a male teacher is:

E(Yi | Xi, D2 = 1) = (alpha1 + alpha2) + beta Xi

The hypotheses are:

Ho: alpha2 = 0

Ha: alpha2 is not equal to 0

... keeping the years of teaching constant.

If alpha2 is significant, the null is rejected, and we conclude that there is salary discrimination based on gender (sex).
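A minimal R sketch of this ANCOVA regression, using simulated data and hypothetical variable names, might look like this:

set.seed(7)
n <- 200
experience <- runif(n, 0, 30)                 # X: teaching experience in years
male <- rbinom(n, 1, 0.5)                     # D2: 1 = male, 0 = female
salary <- 30000 + 5000 * male + 1200 * experience + rnorm(n, sd = 3000)

fit <- lm(salary ~ male + experience)         # Yi = alpha1 + alpha2 D2i + beta Xi + ui
summary(fit)    # the t-test on 'male' tests Ho: alpha2 = 0, i.e. no gender gap
                # holding teaching experience constant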

To be contd...

Thank you 

Aditya Pokhrel

MBA, MA Economics, MPA 

Tuesday, January 26, 2021

GMM - Basics (Generalized Method of Moments) (Applied Econometrics)

 ■ GMM

GMM is a generic method for estimating the parameters of statistical models.

It uses moment conditions that are functions of the model parameters and the data, such that their expectation is zero at the true parameter values. GMM is also used as a dynamic panel estimator.

We know that panel data are longitudinal data, and (as discussed in previous blogs) the relative sizes of T and N determine which estimator is appropriate.

● The case for GMM

Let's assume a linear regression with endogenous regressors,

Y = X beta + u

where Y and u are N×1 vectors, beta is a K×1 vector of unknown parameters, and X is an N×K matrix of explanatory variables.

Because of the endogeneity, we introduce an instrument matrix Z that is N×L, with L > K.

The Z matrix is assumed to comprise a set of variables that are highly correlated with X but orthogonal to u (i.e. a set of valid instruments).

● GMM specifies

a) N (Number of cross sections or groups) > T (Time Space).

b) It uses instrumental variable (IV) estimation.

c) The instruments Z must be exogenous: E(Z'u) = 0.

d) The number of instruments Z should not exceed the number of groups N.

The GMM estimators are of two kinds:

i) Difference GMM
ii) System GMM

GMM is designed for:

- Dynamic Panel Models

- Panels where T is small and N is large.

- Independent variables that are not strictly exogenous, meaning they are correlated with past and possibly current realisations of the error term (endogeneity).

- Fixed effects that are arbitrarily distributed.

- The heteroscedasticity.

- Autocorrelation within the panel or groups.

Basically there are two kinds of instruments in GMM:

- Internal instruments --> gmmstyle ()
- External instruments --> ivstyle ()

i) Difference GMM

- Arellano and Bond (1991)

1. It corrects endogeneity by:

- Transforming all the regressors through differencing.
- Removing the fixed effects.

2. The first-difference transformation has a weakness: it subtracts the previous observation from the contemporaneous one, which magnifies gaps in unbalanced panels.

ii) System GMM

Developed by Arellano and Bover (1995) and Blundell and Bond (1998).

- Corrects endogeneity by:

□ Introducing more instruments to dramatically improve efficiency.

□ Transforms the instruments to make them uncorrelated (exogenous) with the fixed effects.

- Builds a system of two equations: the original (level) equation and the transformed equation.

- Uses orthogonal deviations: instead of subtracting the previous observation from the contemporaneous one, it subtracts the average of all future available observations of a variable. No matter how many gaps there are, this is computable for all observations of a variable except the last for each individual, which minimises data loss.
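A minimal sketch in R, based on the plm package's classic Arellano-Bond example (the EmplUK data shipped with plm); treat it as an illustration rather than a template:

library(plm)
data("EmplUK", package = "plm")

# Difference GMM (Arellano-Bond): lagged levels of log(emp) as GMM-style instruments
ab <- pgmm(log(emp) ~ lag(log(emp), 1:2) + lag(log(wage), 0:1) + log(capital) +
             lag(log(output), 0:1) | lag(log(emp), 2:99),
           data = EmplUK, effect = "twoways", model = "twosteps")
summary(ab)    # reports the Sargan test and the AR(1)/AR(2) tests

# System GMM (Blundell-Bond): add the level equation via transformation = "ld"
bb <- pgmm(log(emp) ~ lag(log(emp), 1:2) + lag(log(wage), 0:1) + log(capital) +
             lag(log(output), 0:1) | lag(log(emp), 2:99),
           data = EmplUK, effect = "twoways", model = "twosteps",
           transformation = "ld")
summary(bb)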

To be contd...

Thank you
Aditya Raz Pokhrel 
MBA, MA Economics, MPA


Monday, January 25, 2021

KPSS Unit root testing (Applied Econometrics)

 ■ KPSS test

This test was introduced by the econometricians Kwiatkowski, Phillips, Schmidt, and Shin, hence the name KPSS test.

This test is quite different from the Augmented Dickey-Fuller (ADF) test and the Phillips-Perron (PP) test. In both ADF and PP, Ho is that the process is non-stationary and Ha is that it is stationary.

Now let's compare the hypotheses of KPSS with those of PP and ADF.

□ ADF/PP

Ho: Yt = I (1)

Ha: Yt = I (0)

□ KPSS

Ho: Yt = I (0)

Ha: Yt = I (1)

The possible outcomes are follows:

a) Reject Ho and Do not reject Ho.

b) Do not reject Ho and Reject Ho.

c) Reject Ho and Reject Ho.

d) Do not reject Ho and Do not Reject Ho.


The last two outcomes are contradictory, while the first two are consistent. In case (d), for example, ADF/PP suggests the series is non-stationary whereas KPSS suggests it is stationary.

In such cases, a solution suggested in the literature is confirmatory data analysis.
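A minimal R sketch of this confirmatory approach, on a simulated series, using the tseries package:

library(tseries)

set.seed(10)
x <- cumsum(rnorm(200))    # a random walk, i.e. I(1)

adf.test(x)     # Ho: unit root (non-stationary); here we expect NOT to reject
kpss.test(x)    # Ho: (level) stationarity; here we expect to reject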

So, this is the KPSS test.

Thank You

Aditya Pokhrel

MBA, MA Economics, MPA


Sunday, January 24, 2021

Basics of Macroeconometric modelling ( Applied/Advanced Econometrics)

 ■ Methodological issues

Generally, macroeconometric modelling aims at describing the empirical behavior of an actual economic system.

Such models can be regarded as systems of interlinked equations estimated from time-series data using statistical or econometric techniques.

Bardsen et al. (2005) state that a conceptual starting point is the idea of a general stochastic process that has generated all the data one observes for the economy, and that this process can be summarized in terms of the joint probability distribution of the random observable variables in a stochastic equation system.

They further assert that, for a modern economy, the complexity of such a system and of the joint probability distribution is evident. Nonetheless, it is always possible to take a highly aggregated approach in order to represent the behavior of a few headline variables, such as inflation, GDP growth, and unemployment, in a small-scale model.

If the model is small enough, the estimation of such an econometric model can be based on formally established statistical theory (as with low-dimensional vector autoregressive models, VARs, where the statistical theory has been extended to cointegrated variables).

The operational procedures of Macroeconometric modelling can be summarized below:

♢ By choosing the relevant variables, the subsectors of the economy can be analysed (marginalisation).

♢ By distinguishing between exogenous and endogenous variables, the relevant partial models are constructed; these will be called type A models.

♢ At last, the submodels built are combined in order to obtain model B for the entire economy.

Thank You

To be contd ...

Aditya Pokhrel
MBA, MA Economics, MPA

Saturday, January 23, 2021

Phillips-Perron Unit Root Testing (Applied Econometrics)

■ Phillips-Perron (PP) Unit Root Test

This test was introduced in 1988 A.D.

As we know, the Dickey-Fuller test is based upon three assumptions:

a) Et ~ i.i.d (0,sigma^2)

b) There must be no Serial Correlation

c) The variance ought to be constant.

Now, in the Dickey-Fuller test, serial correlation is taken into account by adding lagged differences, that is, by adding deltaY(t-1), deltaY(t-2), ..., deltaY(t-p), i.e. by augmenting the original model.

The PP test uses a non-parametric statistical method to take account of serial correlation without adding any lagged differences.

PP is identical to the DF test except that a non-parametric procedure is used to take into account the problem of serial correlation without adding lagged differences to the original equation; the PP test applies a correction to the tau statistic of the coefficient gamma to take account of the correlation in Et.

If there is correlation in Et, the estimate of gamma becomes inefficient, so this correction to the tau statistic is made in the PP test without adding any augmented terms.

So the equation for the PP test is identical to the Dickey-Fuller equation; it is not identical to the ADF equation, because the augmented (lagged-difference) terms are not there.
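A minimal R sketch of the PP test on a simulated series, using the urca package:

library(urca)

set.seed(11)
x <- cumsum(rnorm(300))                            # a random walk (unit root)

pp <- ur.pp(x, type = "Z-tau", model = "constant") # PP test with a constant
summary(pp)     # compare the Z-tau statistic with its critical values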

Thank You

Aditya Pokhrel

MBA, MA Economics, MPA

Friday, January 22, 2021

Indirect Least Squares (Basics - Applied Econometrics)

 ■ ILS

ILS is used to estimate the parameters of a just-identified model (the rules of identification will be discussed in upcoming blogs).

In the just-identified case, there is a one-to-one correspondence between the reduced-form coefficients and the structural parameters.

So from the reduced-form coefficients we can obtain unique estimates of the structural parameters.

ILS has 3 steps:

a) First, obtain the reduced-form equations (each reduced-form equation expresses an endogenous variable as a function of the predetermined variables and a stochastic error term).

b) Second, apply OLS to the reduced-form equations (such estimators will be consistent).

As we will derive later, pi1, pi2, pi3, etc. are the reduced-form coefficients, and their estimates are pihat1, pihat2, pihat3, etc.

In general they are consistent, but under certain conditions these pihats will also be best unbiased, namely if the explanatory variables are purely non-stochastic and do not contain lagged values of the endogenous variables.

c) Third, we obtain the values of the structural parameters from the reduced-form coefficients, using the relationship between the structural parameters and the reduced-form coefficients.

Because OLS is applied to the reduced-form equations rather than directly to the structural equations, the method is called Indirect Least Squares.
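A minimal worked sketch of the three steps in R, using a simulated just-identified demand-supply system (all names and parameter values are illustrative):

set.seed(1)
n   <- 500
inc <- rnorm(n, 100, 10)                # exogenous income (predetermined variable)
u   <- rnorm(n); v <- rnorm(n)
a1  <- -1.5; a2 <- 0.8; b1 <- 0.6       # structural parameters (intercepts set to 0)

P <- (a2 * inc + u - v) / (b1 - a1)     # price implied by demand = supply
Q <- b1 * P + v                         # supply equation: Q = b1*P + v

rf_P <- lm(P ~ inc)                     # steps 1-2: OLS on the reduced-form equations
rf_Q <- lm(Q ~ inc)

b1_ils <- coef(rf_Q)["inc"] / coef(rf_P)["inc"]   # step 3: recover the structural slope
b1_ils                                            # close to the true value 0.6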

To be contd ... 

Thank you

Aditya Pokhrel

MBA, MA Economics, MPA

Thursday, January 21, 2021

Tobin's q (Marginal/Average q)(Investment - Advanced Macroeconomics)

 ■ Tobin's q

As is well known, q summarizes all the information about the future that is relevant to the firm's investment decision.

Actually, q shows how an additional rupee (Nrs) of capital affects the present value of profits. Thus the firm wants to increase its capital stock if q is high and reduce it if q is low. This implies that it does not need to know anything about the future, other than the information summarized in q, in order to make this decision.

Romer stresses that q has a natural interpretation: a one-unit increase in the firm's capital stock increases the present value of the firm's profits by q.

Similarly, it raises the value of the firm also by q. Hence q is the market value of the unit of capital. 

If there is a market for shares in firms, for example, the total value of a firm with one more unit of capital than an otherwise identical firm exceeds the value of the other firm by q.

Note that since we have also assumed that the purchase price of capital is fixed, q is likewise the ratio of the market value of a unit of capital to its replacement cost.

We can state that the firm increases its capital stock if the market value of capital exceeds what it costs to acquire it, and decreases its capital stock if the market value of capital is less than what it costs to acquire it.

Simply, the ratio of market value to the replacement costs of capital is known as Tobin's q. 

The analysis implies that what is relevant to investment is the marginal q - the ratio of market value of the marginal unit of capital to its replacement costs. 

Marginal q is more difficult to measure than average q - the ratio of the total value of the firm to the replacement cost of its total capital stock - so the relationship between marginal q and average q must be understood.

Sometimes in the model marginal q is less than average q. The reason may be that adjustment costs were assumed to depend only on k, which implies diminishing returns to scale in adjustment costs.

Because of this assumption of diminishing returns, the firm's lifetime profits (pi) rise less than proportionately with its capital stock, and thus marginal q is less than average q.

Now, if we modify the model to have constant returns to scale in adjustment costs, average and marginal q are equal (Hayashi, 1982).

Under CRS, q determines the growth rate of the firm's capital stock, and all firms therefore choose the same growth rate for their capital stocks.

For instance, if a firm initially has twice as much capital as another and both optimize profits, the larger firm will have twice as much capital at every future time.

In addition, profits are linear in the firm's capital stock; this implies that the present value of the firm's profits - the value of pi when it chooses the path of its capital stock optimally - is proportional to its initial capital stock. Hence, in this case, average q and marginal q are equal.

In other models there are potentially more significant reasons than the degree of returns to scale in adjustment costs for average q to differ from marginal q.

If a firm faces a downward-sloping demand curve for its product, for example, doubling its capital stock is likely to less than double the present value of its profits; hence marginal q is less than average q.

And if a firm owns a large amount of outmoded capital, its marginal q may well exceed its average q.

Thank You

Aditya Pokhrel
MBA, MA Economics, MPA





Wednesday, January 20, 2021

Criteria for model/lag selection (Applied Econometrics)

 ■ The commonly used criteria

a) R ^2

b) R'^2

c) Akaike Information Criterion (AIC ) 

d) Schwarz Information Criterion (SIC)

e) Hannan-Quinn Information Criterion (HQIC)

f) ML criterion (Will be discussed in upcoming blogs)

The explanations are:

a) R ^ 2

R^2 = ESS ÷ TSS

R^2 is only a measure of goodness of fit in the sample, not of the population fit, so it has nothing to do with the CLRM assumptions.

R^2 is a non-decreasing function of the number of regressors: when an additional regressor is included in the model, R^2 invariably increases and never decreases.

If we have two models, one with a single explanatory variable and another with that variable plus an additional regressor, R^2 from the second model will be at least as large as R^2 from the first.

There is a temptation to choose the second model, but this is not a good procedure, because R^2 imposes no penalty for the additional regressors present in the model; for that we use the adjusted R^2 (R'^2).

b) R'^2

R'^2 = 1 - [RSS ÷ (n - k)] / [TSS ÷ (n - 1)]

          = 1 - (1 - R^2) × (n - 1)/(n - k)

In general, R'^2 <= R^2.

Now we choose the model with the highest R'^2.

The degrees of freedom (n - k) act as the penalty factor for adding additional regressors.

c) AIC

We get, 

AIC = e^(2k/n) × [Summation(Uihat^2) / n]

AIC = e^(2k/n) × (RSS/n)

Taking natural logs on both sides:

ln AIC = ln[e^(2k/n)] + ln(RSS/n)

ln AIC = 2k/n + ln(RSS/n)

Here 2k/n is the penalty factor.

Compared with R'^2, AIC imposes a harsher penalty (= 2k/n) for introducing more regressors; when we compare two models on the basis of AIC, the criterion is to select the model with the lowest AIC.

AIC is also used for lag length selection.

d) SIC

SIC = n^(k/n) × [Summation(Uihat^2) / n]

SIC = n^(k/n) × (RSS/n)

Taking natural logs on both sides:

ln SIC = (k/n) ln(n) + ln(RSS/n)

The penalty factor is (k/n) ln(n).

SIC imposes an even harsher penalty than AIC. As with AIC, the model with the lowest SIC is selected.

e) HQIC

ln HQIC = ln (Sigmahat^2) + 2k/n × ln [ln (n)]

ln HQIC = ln (RSS/n) + 2k/n × ln [ln (n)]

(2k/n) ln [ln (n)] is a penalty factor

Moreover, there is no hard and fast rule for any of these criteria; the model (or lag length) with the lowest criterion value is the one usually chosen, be it in ARDL or VAR lag selection.
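A minimal R sketch comparing two competing models (simulated data); note that R's built-in AIC() and BIC() are computed from the log-likelihood, which ranks models the same way as the RSS-based formulas above:

set.seed(3)
n  <- 100
x1 <- rnorm(n); x2 <- rnorm(n)
y  <- 1 + 2 * x1 + rnorm(n)             # x2 is an irrelevant regressor

m1 <- lm(y ~ x1)
m2 <- lm(y ~ x1 + x2)

c(adjR2_m1 = summary(m1)$adj.r.squared, adjR2_m2 = summary(m2)$adj.r.squared)
AIC(m1, m2)                             # lower is better
BIC(m1, m2)                             # Schwarz criterion; lower is better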

Thank you

Aditya Pokhrel
MBA, MA Economics, MPA






Tuesday, January 19, 2021

Relationship between R square and F stat (Econometrics - Basic)

■ Recasting F in terms of R square

We can recast F in terms of R^2, i.e. formulate the F statistic in terms of R square.

We consider:

Yi = b1 + b2X2i + . . . + bkXki + ui

Let's formulate the hypothesis:

Ho: b2 = b3 = . . . = bk = 0

When we test the overall significance (the intercept is not included in the restriction), our aim is to find out whether Yi is related to the explanatory variables or not.

Then, 

F = [ESS ÷ (k - 1)] / [RSS ÷ (n - k)]

where ESS, RSS, k and n have their usual meanings.

F = [(n - k) ÷ (k - 1)] × [ESS ÷ RSS]

F = [(n - k) ÷ (k - 1)] × [ESS ÷ (TSS - ESS)]

F = [(n - k) ÷ (k - 1)] × [(ESS/TSS) ÷ (TSS/TSS - ESS/TSS)]

F = [(n - k) ÷ (k - 1)] × [R^2 ÷ (1 - R^2)]

F = [R^2 ÷ (k - 1)] / [(1 - R^2) ÷ (n - k)]

F is directly proportional to R^2.

If R ^ 2 = 0 then F = 0

If R ^ 2 = 1 then F = Undefined

Higher the value of F - higher will be the R ^ 2 (F is directly proportional to R ^2).

We also know that Ho is rejected when Fcal > Ftab.

This, then, is the relationship between R^2 and F.
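A minimal R sketch verifying this identity on simulated data:

set.seed(4)
n  <- 80
x2 <- rnorm(n); x3 <- rnorm(n)
y  <- 1 + 0.5 * x2 - 0.3 * x3 + rnorm(n)

fit <- lm(y ~ x2 + x3)
R2  <- summary(fit)$r.squared
k   <- length(coef(fit))                       # number of parameters incl. intercept

F_from_R2 <- (R2 / (k - 1)) / ((1 - R2) / (n - k))
c(F_from_R2  = F_from_R2,
  F_reported = unname(summary(fit)$fstatistic["value"]))   # the two should match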

Thank you

Aditya Pokhrel
MBA, MA Economics, MPA



Monday, January 18, 2021

The Easterlin Hypothesis (Economics of Demography)

 ● The Easterlin Hypothesis

Easterlin (1961, 1969, 1973) states that the positive relationship between income and fertility is dependent on relative income.

It is generally considered the first viable, and still a leading, explanation for the mid-twentieth-century baby boom.

The hypothesis, as formulated by Richard Easterlin, presumes that material aspirations are determined by experiences rooted in family background: he assumes first that young couples try to achieve a standard of living equal to or better than the one they had when they grew up. This is a relative status.

If income is high relative to aspirations and jobs are plentiful, it will be easier to marry young and have more children and still match that standard of living.

However, when jobs are scarce, couples who try to keep that standard of living will wait to get married and have fewer children.

Children are normal goods once this influence of family background is controlled.

For Easterlin, the size of the cohort is a critical determinant of how easy it is to get a good job.

A small cohort means less competition, a large cohort means more competition to worry about. This assumption generally blends economics and sociology.

Thank you
MBA, MA Economics, MPA

Sunday, January 17, 2021

Spillman Production Function (Agro Economics)

 ● Spillman P.F

This function, suggested by Spillman, can be written in the following form:

Y = M - (A × R^x)

where, x = level of input
       Y = resulting output
       M = maximum output that can be attained
       A = total increase in output due to x
       R = ratio of successive units of input to total output

A maximum product is never reached and decreasing total product is not allowed.

As can easily be checked, the Spillman P.F is characterized by positive but decreasing marginal returns when every input but x is held constant.

Elementary transformations show that the isoquant of the Spillman P.F is linear.
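A minimal R sketch evaluating the Spillman P.F for illustrative parameter values (M, A, and R below are assumed, not estimated):

spillman <- function(x, M = 100, A = 80, R = 0.8) M - A * R^x

x <- 0:20
y <- spillman(x)
plot(x, y, type = "b", xlab = "input x", ylab = "output Y")
diff(y)    # marginal products: positive but steadily decreasing toward zero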

Spillman was interested in determining whether or not the law of diminishing returns had empirical support within some rather basic agricultural production processes.

The Spillman P.F has seldom been used by agricultural economists. It is primarily of historical interest, because Spillman's research represented one of the first efforts to estimate the parameters of a production function for some basic agricultural processes.

This is all about brief concept on Spillman P.F.

Thank You
Aditya Pokhrel
MBA, MA Economics, MPA

Saturday, January 16, 2021

Transcendental Production Function (Agro Economics)

■ Transcendental P.F

Generally, the Cobb-Douglas production function (CDPF) did not properly represent the neoclassical three-stage production function.

The problem of greatest concern at that time was the fixed production elasticities, which require that APP (Average Physical Productivity) and MPP (Marginal Physical Productivity) be in a fixed proportion to each other.

This issue was not unrelated to the fact that the CDPF could represent only one stage of production at a time, very much unlike the neoclassical presentation.

Halter, Carter, and Hocking were concerned with the lack of compatibility between the CDPF and the neoclassical three-stage production function (P.F).

Researchers sought to modify the CDPF to allow for three stages of production and variable production elasticities, yet at the same time retain a function that was clearly related to the CDPF and easy to estimate from agricultural data.

The function that Halter et al. introduced in 1957 looked like a slightly modified version of the CDPF.

The base of the natural logarithm, "e" was added and raised to a power that was a function of the amount of input that was used.

The two-input function was:

Y = A × x1^alpha1 × x2^alpha2 × e^(y1·x1 + y2·x2)
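A minimal R sketch evaluating this function for assumed parameter values (the numbers below are purely illustrative):

# Y = A * x1^a1 * x2^a2 * exp(g1*x1 + g2*x2)
transcendental_pf <- function(x1, x2, A = 1, a1 = 0.4, a2 = 0.3,
                              g1 = -0.05, g2 = -0.02) {
  A * x1^a1 * x2^a2 * exp(g1 * x1 + g2 * x2)
}

# With negative g's, output in x1 eventually declines (holding x2 fixed),
# so all three neoclassical stages can appear, unlike the plain CDPF.
curve(transcendental_pf(x, x2 = 5), from = 0.1, to = 60,
      xlab = "x1", ylab = "Y")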

Thank you
Aditya Pokhrel
MBA, MA Economics, MPA

Friday, January 15, 2021

Ricardian Equivalence Discourse (Advanced Macroeconomics)

 ● Ricardian Equivalence Parley

As we are all familiar, Ricardian equivalence pivotally concerns tax cuts in the short run financed by higher tax rates in the future (Ricardian equivalence will be discussed further in upcoming blogs).

Now, what struck my mind today is that Ricardian equivalence is actually the result of the irrelevance of the government's financing decisions.

Let's take an example: say the government issues a quantity V of bonds to the households H at time t1 and plans to retire the bonds at a later time t2.

This implies that each household will be taxed e^[R(t2) - R(t1)] × V at time t2.

This policy has basically two effects on the representative household. First, the household has acquired an asset, the bond, whose present value as of t1 is V.

Second, it has acquired a liability - the future tax obligation - whose present value as of t1 is also V.

Hence the bond does not represent net wealth to the household, and it therefore does not affect the household's consumption behavior.

The household simply saves the bond and the interest it accumulates until time t2, at which point it uses the bond and interest to pay the taxes the government levies to retire the bond.

The result follows solely from the households and government budget constraints, and not from any other features of the model.

Now, traditional economic models assume that the shift from tax to bond finance increases consumption. Traditional analyses of consumption, for instance, mostly model consumption as depending only on current disposable income, i.e. Y - T.

The Ricardian and traditional views of consumption have quite different implications for several policy issues.

The traditional view implies that budget deficits increase consumption and thus reduce capital accumulation and growth; the Ricardian view implies that they have no effect on consumption and capital accumulation.

For instance, governments often cut taxes in recessions to increase consumption spending; under Ricardian equivalence, however, all of this goes in vain.

Overall, it is important to determine which view is nearer to reality.

Thank You
Aditya Pokhrel
MBA, MA Economics, MPA




Thursday, January 14, 2021

Malthus - Ricardo controversy over Glut (History of Economic Thought - Economics)

● Controversy over Glut

Generally, overproduction refers to the condition where the market has goods but there is no demand for them.

The whole controversy rests on Say's Law of Markets.

Basically, Say's Law of Markets states that supply creates its own demand.

So there might be temporary overproduction and unemployment - even Say conceded that - but there will be no general unemployment problem, and the economy tends toward full-employment equilibrium.

As per Malthus, when things are being produced, the producers are saving for capital accumulation, i.e. the savings of the economy can be used for capital investment.

This means that, to that extent, the amount saved is not being consumed; demand therefore decreases, and we cannot say that whatever is being produced is being demanded. The overproduction, or glut, persists until there is exogenous consumption from the unproductive classes - those not participating in production - out of their past savings.

Nonetheless, Keynes later explained that the gap would be filled by income generation and investment.

Hence, Malthus stated that until there is consumption by the unproductive classes, the gap will not be filled, and because of this there would be a glut (overproduction).

This overproduction may result in unemployment, because whatever the producer is making, s/he is not able to sell.

Investment increases (capital increases) ➡ savings increase ➡ consumption decreases, so demand decreases.

He also stated that the producers' propensity to save is higher than that of the labor class, which reinforces the overproduction.

■ Ricardo's answer on this

Ricardo held to his belief in Say's Law of Markets.

"The accumulation of capital (by capitalists' investment) is not itself an unending and uninterrupted process; it may go on indefinitely or there might be a downturn."

Ricardo said that whenever there is an accumulation of capital, labor demand increases, which means wages also increase; then the cost of production increases and profit decreases. This decelerates capital accumulation. This is the first effect.

But the second effect of this whole process is that when wages increase, the propensity to consume increases and labor supply increases (since population increases).

The increase in labor supply induced by the higher wages will in turn neutralize the increase in demand. So, in this way these things move in a cycle.

So capital accumulation increases, then wages increase, then labor supply increases, and then wages decrease again.

This continues on; there will be no glut in the economy and the economy will remain in equilibrium. This position was supported by Say's Law and classical economics.

Thank you

Aditya Pokhrel
MBA, MA Economics, MPA

Wednesday, January 13, 2021

Lerner's Symmetry Theorem (International Economics)

■ Lerner's Symmetry Theorem


Economist A.P. Lerner explained that an ad valorem tax on exports and an ad valorem tariff on imports will have the same effect.

If a tax is imposed on exports, the domestic price of the importable good will remain the same as the world price.

An ad valorem duty imposed on imports would lead to an increase in domestic import prices, but the price of the same commodity in the world market would decrease. The optimum export tax exploits the monopoly power of a country in the export-goods market.

Likewise, the optimum import tariff exploits the monopoly power of a country in the import-goods market.

By the same symmetry, subsidies on exports and on imports will reduce the importable price and increase the volume of exports and imports.

In a large country, both kinds of subsidy will reduce welfare.

● Effects of tariffs

♢ Effect of tariffs in short run

When a tariff is imposed, the price of importable goods increases. The capital-labor ratio is constant. Real income shifts from the export industry to the factors used in the import-competing industry.

♢ Effect of tariffs in medium term

The imposition of a tariff in the medium term leads to an increase in the real return to capital in the import-competing sector and a decline in the real return to capital in the export sector. The capital-labor ratio will decline.

♢ Effects of tariff in long run

The imposition of a tariff in the long run leads to a fall in the real return of the abundant factor and a rise in that of the scarce factor, as per the Stolper-Samuelson theorem (to be discussed in upcoming blogs).

The capital-labor ratio will increase, and the factors of production are mobile.

Thank you

Aditya Pokhrel
MBA, MA Economics, MPA

Tuesday, January 12, 2021

Ridge Regression (Basic) (Advanced Econometrics - Recent Developments in Regression Concepts)

 ■ Ridge Regression

Hoerl and Kennard (1970) developed the concept by proposing the class of estimators defined by

betahat(gamma) = (X'X + gamma·I)^(-1) X'y,

which are called ridge estimators.
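A minimal R sketch (simulated data, an arbitrarily chosen gamma) computing this estimator directly and comparing it with OLS:

set.seed(42)
n <- 100; k <- 3
X <- cbind(1, matrix(rnorm(n * (k - 1)), n, k - 1))   # design matrix with intercept
beta <- c(1, 2, -1)
y <- X %*% beta + rnorm(n)

gamma <- 0.5                                          # ridge penalty (assumed value)
beta_ols   <- solve(t(X) %*% X) %*% t(X) %*% y
beta_ridge <- solve(t(X) %*% X + gamma * diag(k)) %*% t(X) %*% y

cbind(OLS = beta_ols, Ridge = beta_ridge)             # ridge shrinks the estimates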

In machine learning terms, ridge regression falls under L2 regularisation, because the penalty uses the L2 (Euclidean) norm.

Ridge regression shrinks the coefficients of our feature space, which helps when we have a large number of regressors to handle.

The dataset is generally divided into the Training set and Test set. The model we have is developed on the basis of the Training set.

On the training set the model performs well: if we calculate the training error, most of the points lie close to the best-fit line, so that

Sigma (yi - yihat)^2 ≈ 0

But when we deploy this particular linear model on the test set, the errors shoot up. We therefore have to penalize our linear model, because it may produce large test errors.

This is the problem of overfitting: training accuracy is high while test accuracy is very low. The penalisation is applied on top of the residual errors.

● Why Ridge estimators developed ?

Amemiya (1998) notes that Hoerl and Kennard chose these estimators because they hoped to alleviate the instability of the least squares estimator, caused by the near-singularity of X'X, by adding a positive scalar gamma to the characteristic roots of X'X.

They also proposed the ridge trace method for determining the value of gamma: gamma is chosen as the smallest value at which the ridge trace stabilizes.

This study has 2 major weaknesses: 

¤ The point at which the ridge trace starts to stabilize cannot always be determined objectively.

¤ The method lacks theoretical justification, inasmuch as its major justification is derived from certain Monte Carlo studies which, though favorable, are not conclusive.

In upcoming blogs I will write more about varieties of ridge estimators, called generalized ridge estimators, some of which involve the empirical Bayes method of determining gamma.

Note: this is just a simple introduction to ridge regression. In upcoming blogs I will continue with the equational framework and slightly more advanced concepts, including the use of hyperparameters.

Source: Self made notes on Machine Learning and Advanced Econometrics, Amemiya (1998).

Thank You

Aditya Pokhrel

MBA, MA Economics, MPA


Monday, January 11, 2021

What's Fibonacci Retracements ? (Technical Analysis - Share - Finance)

● Fibonacci Retracements

Generally, retracements are temporary price reversals that take place within a larger trend. After a small reversal, the price breaks its previous high.

This happens mostly due to profit booking or price correction. Fibonacci retracements are a way to find these retracement levels in advance.

This concept was developed more than 800 years ago by Leonardo Fibonacci; the underlying idea also appears in "Vedic Mathematics".

Fibonacci retracements are ratios used to identify potential reversal levels. (Our bodies, for instance, are also said to grow in proportions based on the Fibonacci series.)

The sequence is: 

0+1 =1

1+1 = 2

2+1= 3

3+2= 5

5+3= 8 ...

34+55 =89  

55+89 = 144 

89+ 144 = 233

144 + 233 = 377 and so on.

Then 34 ÷ 55 = 61.81%

55÷89 = 61.79 %

34 ÷ 89 = 38.20% 

34÷144 = 23.61 % and so on.

■ Drawing Fibonacci Retracements

First estimate a trend line of the share price movement. Then mark the top (peak) point and the low (bottom) point.

Bottom-point reversals indicate the chances that prices will rise from there.

We use the trend drawn from top to bottom to find the uptrend levels, and the trend drawn from bottom to top to find the downtrend levels.

Then we connect the top and bottom points. After this connection, the Fibonacci percentage lines are drawn at 100%, 61.8%, 38.2%, and 23.6% [the software does this, don't stress :)].
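A minimal R sketch computing these levels from a hypothetical swing low and swing high:

fib_levels <- function(low, high, ratios = c(0.236, 0.382, 0.5, 0.618, 1)) {
  rng <- high - low
  data.frame(ratio = ratios,
             uptrend_retracement   = high - ratios * rng,  # pullback levels in an uptrend
             downtrend_retracement = low + ratios * rng)   # bounce levels in a downtrend
}

fib_levels(low = 1000, high = 1500)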

□ Corrections

Now we wait and see where the price corrects to. The correction can come at 23.6, 38.2, or 61.8.

Notably, the Fibonacci retracement levels are used as a good indicator for exiting a trade.

□ Short and Long strategy

If we went long at the bottom-most point, then our first target is the point where the 23.6 level cuts the upward trend; there we can exit or trail a stop loss.

Our 2nd target is 38.2 on the upward trend.

But if we short-sold when the price was going down and the price then started going up, we will incur a loss. In that case, around the first 23.6 level we will usually get a small correction.

The swing after that correction is the point where we can trade.

● BEWARE using Fibonacci concept

¤ Never ever mix the reference point.

¤ Never ignore long term trends.

¤ Never rely on Fibonacci as a standalone indicator.

¤ Never ever use Fibonacci over short intervals ( one must take at least 3 to 4 months).

Thank you

These are references from my self-made notes. Sorry for not presenting this pictorially.

Aditya Pokhrel

MBA,MA Economics, MPA




Sunday, January 10, 2021

Unit Root Testing : Why many tests? (Applied Econometrics)

● Why are there so many unit root tests?

The answer lies in the size of the test and the power of the test.

The size of the test is the level of significance, i.e. the probability of committing a Type I error (alpha).

The Power of the test: The probability of rejecting Ho when Ho is false.

Power is calculated by subtracting the probability of a Type II error from 1.

Say, probability of the Type II error is beta.

So, Power = 1 - beta.

The maximum power is 1 (if probability of accepting a false hypothesis is 0)

In most of the unit root tests: 

        Ho: Non Stationary

        Ha: Stationary.

Say the first model (a pure random walk) is the true model, but we estimate the second model instead.

If the test finds the process stationary at 5%, i.e. alpha = 5%, then the null hypothesis is rejected.

However, this is not the true level of significance, this is the nominal level of significance, the true level of significance is much higher.

If we experiment with different models, the true level of significance turns out to be different. This was pointed out by "Lowell".

Most Dickey-Fuller-type tests have low power: they tend to accept Ho too frequently (i.e. to accept a false Ho repeatedly).

One reason is that power depends on the span of the data (power is high when the span is large): 30 observations over 30 years give more power (long span), while 100 observations over 100 days give less power (short span).

》If phi (the coefficient in the main equation) is close to but not strictly equal to 1, say 0.95, we may wrongly declare the series non-stationary (a simulation sketch follows this list).

》If we have more than one unit root, the relevant test is the Dickey-Pantula test.

》If there are structural breaks, conventional unit root tests will not catch them. Stock and Watson discuss how non-stationarity arises from two sources, viz.

          ♢ a stochastic trend (which the conventional unit root tests will capture), and

          ♢ structural breaks (which the conventional unit root procedure will not catch).

Maddala and Kim likewise criticize the ADF test for its low power, its inability to detect structural breaks, etc.
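A minimal Monte Carlo sketch in R illustrating the low power of the ADF test when phi = 0.95 (a stationary but highly persistent series), using tseries::adf.test:

library(tseries)

set.seed(8)
reject <- replicate(200, {
  y <- arima.sim(model = list(ar = 0.95), n = 100)   # stationary AR(1), phi = 0.95
  suppressWarnings(adf.test(y)$p.value) < 0.05       # does ADF correctly reject Ho?
})
mean(reject)    # rejection rate = empirical power; typically well below 1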

Reference: My guru Prof Thomachan, University of Calicut, Kerala.

To be contd...

Thank you

Aditya Pokhrel

MBA, MA Economics, MPA.

Saturday, January 9, 2021

Phenomenon of Spurious Regression (Co Integration - Applied Econometrics)

■ Spurious Correlation

Let us first view this concept in the cross-section context.

Spurious correlation, in this context, describes the situation where two variables appear to be related even though they are not directly related; they are correlated only through a third variable.

That is how spurious correlation arises in cross-section data; in the time-series context it can occur even with I(0) variables.

We also find spurious relations in time series when the variables share increasing or decreasing trends.

Say, that the trend is deterministic, then the problem can be solved easily by introducing trend as an additional variable in the regression model.

The problem of Spurious correlation was first discovered by Yule in 1926.

But even if the two series Y and X are not trending, we can get a "significant" relation; this is the case of spurious regression.

Say that, Xt = Xt-1 + at

                 Yt = Yt-1 + bt

These are the examples of pure Random Walk.

Xt and Yt are independent with at and bt satisfying the classical assumption.

If we let Xo = 0 and Yo = 0, then we consider a regression;

Yt = beta1 + beta2 Xt + ut

Ho: beta 2 = 0 

Ha: beta 2 is not equal to 0

(Xt and Yt are independent random walk process, they are not related).

If the null hypothesis is accepted, then there is no problem: Xt and Yt are independent random walk processes (they aren't related).

However, Granger and Newbold (1974) showed that even if Xt and Yt are independent, when we regress Yt on Xt the true level of significance is much larger than the conventional levels (1% or 5%).

This was further confirmed in 1993 by Davidson and MacKinnon.

Granger and Newbold suggested that whenever R^2 > d (the Durbin-Watson statistic), we must suspect that the regression is spurious.
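A minimal R simulation of this phenomenon (lmtest is used for the Durbin-Watson statistic):

library(lmtest)

set.seed(5)
n <- 500
x <- cumsum(rnorm(n))     # Xt = Xt-1 + at
y <- cumsum(rnorm(n))     # Yt = Yt-1 + bt, generated independently of x

fit <- lm(y ~ x)
summary(fit)$coefficients                   # the slope often looks highly "significant"
c(R2 = summary(fit)$r.squared, DW = unname(dwtest(fit)$statistic))   # typically R^2 > DW

summary(lm(diff(y) ~ diff(x)))$coefficients # differencing removes the spurious relation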

● Implications of spurious regressions.

¤ The chance of getting non-zero estimates even though there is no relationship between the variables is very high.

¤ The estimated coefficients are misleading.

Say we regress Y on X and betahat2 is significant, or we compute the correlation between X and Y and "r" is significant. Now take dX and dY (the first differences): if the correlation between dY and dX is also significant, then the relation is not spurious, and vice versa.

The exceptions in making stationary will be considered in upcoming blogs.

Thank you

Aditya Raz Pokhrel

MBA, MA Economics, MPA.  

Friday, January 8, 2021

ARCH and GARCH modelling Basics Part 1 (Applied Econometrics)

● Basics of ARCH and GARCH modelling.

Previously, our modelling was concerned with modelling the mean term only.

We did not consider modelling the variance of Y; that is the concern here.

This also amounts to modelling the attitude of investors: modelling the variance is the central idea of financial econometrics.

It is about the volatility of risk, and about models capable of modelling the volatility or variance of a series.

Our concern here is not the expected value but the risk (variance). Heteroscedasticity (unequal variance) is usually encountered in cross-section data, due to the heterogeneous nature of the individuals and entities.

But with time-series data on asset returns, such as stock or foreign-exchange returns, we observe autocorrelated heteroscedasticity, i.e. heteroscedasticity observed over different periods; this phenomenon is called ARCH (Autoregressive Conditional Heteroscedasticity).

Suppose, for modelling a financial time series, we have the NEPSE index and we take:

dlog (NEPSE) = dlog (Price) = dP/P × 100 %

Then we will get daily returns.
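A minimal R sketch of this calculation on a simulated stand-in for the NEPSE index (the series below is not real data):

set.seed(6)
nepse <- 1000 * exp(cumsum(rnorm(500, sd = 0.01)))   # hypothetical price index

ret <- 100 * diff(log(nepse))    # daily returns in percent
plot(ret, type = "l", ylab = "daily return (%)")     # look for calm and wild stretches
var(ret)    # unconditional (long-run) variance; it ignores volatility clustering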

There is considerable variability in the daily returns.

Financial data exhibit a phenomenon called volatility clustering.

VOLATILITY CLUSTERING: periods of turbulence, in which prices swing widely, alternate with periods of tranquility (wild and calm periods).

The mean of the process remains constant but the conditional variance changes over time.

However, asset prices are non-stationary (for example, the NEPSE price level), while the returns are stationary, though volatile.

Suppose we calculate the daily return of NEPSE, rt; then

Variance = Sigma[(rt - r')^2] / (n - 1), where r' = average return.

This variance will not capture volatility clustering, because it is the unconditional variance, i.e. the long-run variance. It does not take the past history of returns into account and does not allow for time-varying volatility in returns. What we need is a variance conditional on past volatility, and that is what ARCH models capture.

Suppose we purchase an asset (a share) at time t and want to sell it at t+1. For an investor, a forecast of the return at t+1 is important, but it is not enough, because the variance of the returns may also play an important role.

If we purchase an asset today and plan to sell at t+1, then if the value at t+1 is low we lose, but if the value at t+1 is high we gain a lot.

So, the variance of the returns is required during the holding period.

Thus the unconditional variance is not useful here; only the conditional (e.g. daily) variance is relevant.

To be contd...

Thank you

Aditya Pokhrel

MBA, MA Economics, MPA.

Thursday, January 7, 2021

NIFRA's IPO issue buzz (Finance - Share Analysis)

NIFRA's IPO

NIFRA is the only infrastructure bank in Nepal and is quite new to the banking fraternity.

It is on its way to release IPO on,

IPO's release date: 2077/10/02

IPO's closure date: 2077/10/06 (after banking hour)

In case the IPO is not fully subscribed, the closure date would be extended to 2077/10/16.

NIFRA is going to issue 80 million ordinary shares.

The ordinary shares would be allotted in the following manner:

▪ 160,000 shares for employees under NIFRA.

▪ 4000000 shares for Joint Investment Trust.

▪ 75840000 as an ordinary shares to be issued.

☆ About NIFRA

Authorized Capital: Nrs. 40 Arab ( each share having FV of Rs. 100).

Issued capital: Nrs. 20 Arab. Of the 20 crore shares making up that 20 Arab, 12 crore shares have been allotted to institutional shareholders, and the remaining 8 crore shares are to be issued and fully paid by ordinary shareholders.

□ Fundamental Analysis - brief (Why to invest ?)

Per-share net worth - Rs. 117.69 in FY 2077/78, up from Rs. 113.62 in FY 2076/77.

Earnings per share - Rs. 5.58 (FY 2077/78), projected to increase to Rs. 7.67 in FY 2078/79.

The bonus share of 8% is projected as the company's share capital is projected to increase by 8% in the FY 2078/79 and so on (contingencies applied). 

□ Technical analysis - this will be done in detail in upcoming blogs, with clear explanations with reference to more than 120 indicators.

■ Where NIFRA will invest?

- Nrs 4 Arab 32 Crore on Loan and Borrowings.

- Nrs 3 Arab 68 Crore on other investments.

● Ratings

ICRA Nepal has assigned a "BBB" rating to this IPO, which means it is relatively less risky and reasonable in terms of investment criteria.

● Application

One can apply for a minimum of 10 shares (FV = Rs. 100).

The maximum is 2 crore shares, but as we all know, if one applies for shares worth more than Nrs. 500,000 then one must submit PAN details.

Hence, this is just a brief information.

Thank you.

Aditya Raz Pokhrel.

MBA, MA Economics, MPA. 



Wednesday, January 6, 2021

Panel data framework selection (Econometrics)

Panel Data Selection Framework

This is only a brief outline on Panel data framework.

T = Time Series Component
N = Cross section cardinality

● The Conditions are:

♢ If T > 25 and N > 25, then first run a CD test (a CD-test sketch in R is given after this list).

If there is no CD, go for a 1st-generation panel unit root test (IPS, Levin-Lin-Chu). If DV(0), EV(0): Pooled OLS, Fixed Effects, Random Effects, and Seemingly Unrelated Regressions (SUR). If DV(1), EV(1, 0, or mixed): panel ARDL (PMG, MG, DFE).

If there is CD, use a 2nd-generation panel unit root test (CIPS). If DV(0), EV(0): Common Correlated Effects. Likewise, if DV(1), EV(0, 1, or mixed): Dynamic Common Correlated Effects.

♢ If T < 25 and N > 25, then go for GMM.

♢ If T > 25 and N < 25, then use 1st-generation panel tests. If DV(1), EV(1), go for the Pedroni, Fisher, and Kao cointegration tests.

♢ If T < 25 and N <25, Pooled OLS - FE and RE.
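As promised above, a minimal R sketch of the CD test, using plm's built-in Grunfeld data as a stand-in for the reader's own panel:

library(plm)
data("Grunfeld", package = "plm")

pcdtest(inv ~ value + capital, data = Grunfeld,
        index = c("firm", "year"), test = "cd")   # Ho: no cross-sectional dependence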

The abbreviations: 

a) DV (1) = Dependent Variable follows integration of 1st order.

b) DV (0) = Dependent Variable is stationary.

c) EV (1) = Independent variable follows 1st order of integration.

d) EV (0) = Explanatory Variables are stationary.

e) CD = Cross Sectional Dependency.

f) FE = Fixed Effect.

g) RE = Random Effect.

h) GMM = Generalized Method of Moments.

i) PMG = Pooled Mean Group.

j) DFE = Dynamic Fixed Effects estimator.

k) MG = Mean Group.

That's it - just a brief outline of panel data framework selection that I put together from my notes. The details of panel data will be discussed later on this blog.

Hope this helps aspiring researchers.

Thank you


Aditya Raz 
Pokhrel
MBA, MA Economics, MPA. 

Tuesday, January 5, 2021

The Model Selection Criteria (Econometrics)

Model Selection Criteria


Many of us have studied the model selection criteria in econometrics or statistics under the CLRM approach. One of the CLRM assumptions is that the model is correctly specified, so that no specification error occurs.

The basic queries dealt over here are:

a) How do we find out the correct model ? 
b) What is the criteria to select a good model ? 

■ Other important coverages are:

♢ What are the types of specification errors that a person commits while specifying the model ? 

♢ What are the consequences of model specification errors ? 

♢ How does one find out whether model specification error is committed or not ? 

♢ Having committed the specification error, how to correct it? 

♢ How does one choose between the competing models ?

These are the several concerns of model selection. Hendry and Richard discuss the criteria an econometric model should satisfy.

● A model adopted for empirical analysis should satisfy the following conditions:

☆ Should be data admissible (the predictions made from the model must be logically permissible).

☆ Consistent with the theory.

☆ Weakly exogenous regressors (the explanatory variables should be uncorrelated with the error term).

This will be automatically satisfied if X is assumed to be exogenous; if X is endogenous, we have to ensure that the covariance Cov(Xi, Ui) = 0, otherwise we will get inconsistent estimates. This is the ENDOGENEITY PROBLEM (to be discussed in upcoming blogs).

The system must exhibit the following:

¤ It must exhibit parameter constancy: the values of the parameters must be stable.

¤ It must exhibit data coherency: the residuals obtained from the model should be purely random.

¤ It should be encompassing: the chosen model should be able to encompass, or nest, the alternative rival models.

Thank you.

Aditya Raz Pokhrel
MBA, MA Economics, MPA. 

Monday, January 4, 2021

Durbin h test (In Autoregressive Models)

Durbin h test.


We are familiar with the Durbin-Watson test. As we studied in the Master's in Economics, in the DW test the model should not contain lagged values of the dependent variable as explanatory variables; the explanatory variables should be strictly exogenous.

However, in autoregressive models, if we apply the OLS method, d comes out close to 2 (the range of d is 0 to 4), and we know that if the value of d is around 2 there is no sign of autocorrelation.

So if we apply the Durbin-Watson test to detect autocorrelation in an autoregressive model, we will erroneously conclude that there is no autocorrelation, even when autocorrelation is actually present in the data. The Durbin-Watson test therefore cannot be used there.

Durbin has suggested another large-sample test to detect first-order autocorrelation in autoregressive models: the Durbin h statistic.


h = rhohat × sqrt{ n / (1 - n × Var(alphahat)) }

where, n = sample size
       Var(alphahat) = variance of the coefficient of Y(t-1)
       rhohat = estimate of the first-order autocorrelation coefficient


Now, for large samples, Durbin has shown the following.

Ho : Rho = 0

Ha: Rho is not equal to 0

[where ut = rho u(t-1) + epsilon t]

So h ~ N(0, 1) asymptotically, i.e. in large samples h follows the standard normal distribution.

In practice, rhohat ≈ 1 - d/2.
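A minimal R sketch computing h for a simulated autoregressive model (lmtest provides the d statistic):

library(lmtest)

set.seed(123)
n <- 200
x <- rnorm(n)
y <- numeric(n)
for (t in 2:n) y[t] <- 0.5 * y[t - 1] + 0.8 * x[t] + rnorm(1)

ylag <- c(NA, y[-n])                        # y lagged one period
fit  <- lm(y ~ ylag + x)                    # autoregressive model

d        <- unname(dwtest(fit)$statistic)   # Durbin-Watson d
rho_hat  <- 1 - d / 2
var_alph <- vcov(fit)["ylag", "ylag"]       # variance of the coefficient on y(t-1)
n_used   <- length(resid(fit))

h <- rho_hat * sqrt(n_used / (1 - n_used * var_alph))   # valid only if n*Var < 1
h    # compare with N(0,1) critical values, e.g. |h| > 1.96 at the 5% level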


● Features of h stat are given as follows:

a) It does not matter how many explanatory variables or how many lagged values of Y are used in the model; all that is required is the variance of the coefficient of Y(t-1).

b) The test is not applicable when n × Var(alphahat) > 1, since the quantity under the square root is then negative.

c) Since it is a large-sample test, its application in small samples is not strictly valid.

d) One can use the Breusch-Godfrey (BG) LM test, which is statistically powerful when the sample size is very large; in small samples, too, the BG/LM test is preferred over the h statistic.

Now, if we detect autocorrelation by the Durbin h test or the BG LM test, what should we do?

The answer is that we should use the Newey-West HAC procedure to estimate the parameters: first we solve the problem of correlation between Y(t-1) and vt using a proxy, and then we use Newey-West HAC standard errors to deal with the autocorrelation.

Thank you guys for reading this. I am sorry for the notation, as I am writing these posts on my cell phone.

In the coming days, I will be writing more.


Aditya Raz Pokhrel

MBA, MA Economics, MPA. 

