G7 Functions

Arithmetic @ Functions

The following arithmetic @ functions are available for use in G7:

@bmk(x,y,[d|g]):

benchmark function; like @lint, it fills in missing values, using y as a mover series. The new series preserves the movement of y as much as possible while passing smoothly through the non-missing values of x. The default ‘d’ method allocates differences by a linear additive adjustment; the optional ‘g’ growth-rate method adjusts the average growth rate of y to match that of x.
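For instance (with hypothetical series names), if x is a benchmark series with missing values and y is a complete indicator series, the two methods might be invoked as:

f xfull = @bmk(x, y)
f xfullg = @bmk(x, y, g)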

@cum(y,x,z):

y equals the cumulation of x with spill rate of z. The calculation is y[t] = (1-z)*y[t-1] + x[t].
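A common use is accumulating a capital stock from investment with a depreciation (spill) rate. A hypothetical example, in which the result series itself is passed as y so that the recursion can draw on its own lagged values, is:

f kstock = @cum(kstock, invest, 0.08)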

@diff(x):

the first difference of x. The calculation is (x[date2] - x[date1]).

@dlog(x):

the first difference of the natural logarithm of x. The calculation is log(x[date2] / x[date1]).

@exp(x):

exponential function of x.

@fabs(x):

absolute value of x.

@gr(x):

the growth rate of x. The calculation is 100 * (x[date2] / x[date1] - 1).

@ggr(x,date1,date2):

the geometric growth rate of x over interval <date1> to <date2>, where <date1> is the base period. The calculation is: 100*(pow(x[date2]/x[date1], (1/(date2-date1))) - 1)
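For example, the average annual growth rate of a hypothetical series gdp between 1990 and 2000 might be computed by:

f g = @ggr(gdp, 1990, 2000)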

@hpfilter(x,[lambda]):

Implement the Hodrick-Prescott filter for x. Default values for lambda are 14400 for monthly data, 1600 for quarterly data, and 100 for annual data.
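For example, the trend of a hypothetical quarterly series gdp might be extracted with either the default or an explicit smoothing parameter:

f gdptrend = @hpfilter(gdp)
f gdptrend = @hpfilter(gdp, 1600)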

@if(condition,exp1,exp2):

if the condition is true, then calculate the value of expression exp1. Otherwise, calculate the value of expression exp2. Valid tests include <=, <, ==, >, and >= between legitimate G7 expressions.
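For example, a hypothetical command that caps the series x at 100 is:

f xcap = @if(x > 100, 100, x)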

@ifpos(x):

= 1 if x > 0, else 0.

@lint(x):

fills in any missing observations (-0.00001) in the series x by linear interpolation except at the beginning and end.

@log(x):

natural logarithm of x.

@max(x[,date1,date2]):

the maximum value of x. Optionally specify an interval from date1 to date2.

@mean(x,date1,date2):

the arithmetic mean of x over interval <date1> to <date2>.

@min(x[,date1,date2]):

the minimum value of x. Optionally specify an interval from date1 to date2.

@miss(x):

replaces all true zeroes in x with missing observations.

@normal():

generates random numbers with normal distribution: N(0,1).

@pcl(y,x):

computes y[t] = y[t-1] * (1 + x[t] / 100). Note: x is the percentage increase over last period, and t starts from the first forecasting period as defined by fdates.
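For example, if grate is a series of assumed percentage growth rates, a hypothetical series gdp might be extended over the forecast period by:

f gdp = @pcl(gdp, grate)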

@peak(y,x,z):

y equals the previous peak of x, with the peak declining at rate z.

@pos(x):

= x if x > 0, otherwise 0.

@pow(x,z):

raises x to the power z.

@rand():

generates random numbers with uniform distribution over (0,1).

@round(x,n):

rounds x to n decimal places.

@sin(x):

sine of x.

@sign(x):

= 1 if x >=0, else -1.

@sq(x):

the square of x.

@sqrt(x):

the square root of x.

@stdev(x,date1,date2):

the (sample) standard deviation of x over interval <date1> to <date2>.

@sum(x):

computes the sum of x over the dates specified in the last “lim” command. The sum appears in the observation of the last date.

@yoy(x):

the year-on-year growth rate of x.

@zero(x):

replaces all missing observation signs in x with true zeroes.

Frequency Conversion Functions

The following functions convert the frequency of a variable by aggregation:

@<high>to<low>(x):

converts the high-frequency series x to a lower frequency series by forming the arithmetic mean.

@<high>to<low>e(x):

converts the high-frequency series x to a lower frequency series by applying the end-of-period values.

@<high>to<low>max(x):

converts the high-frequency series x to a lower frequency series by taking the maximum.

@<high>to<low>min(x):

converts the high-frequency series x to a lower frequency series by taking the minimum.

@<high>to<low>s(x):

converts the high-frequency series x to a lower frequency series by taking the sum.

where ‘high’ and ‘low’ must be replaced with ‘m’ (monthly), ‘q’ (quarterly), ‘s’ (semiannual), or ‘a’ (annual). For example, @mtoa(x) converts the monthly series x to annual frequency by forming the average of monthly values. Note that sums and averages are formed over non-missing values, so that missing values are ignored.
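As further hypothetical examples, a monthly series ipmon might be converted to annual and quarterly frequency by:

f ipann = @mtoa(ipmon)
f ipqtr = @mtoqs(ipmon)

where the first command forms annual averages and the second forms quarterly sums.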

Frequency Conversion by Interpolation

The following functions convert the frequency of a variable by interpolation:

@atoq(x):

converts the annual series x to a quarterly series by interpolation. The new quarterly series will have the correct annual total. (The annual series is cumulated; a cubic polynomial is fitted to each successive set of four points; the values of the polynomial are calculated at the ends of the quarters of the middle two points; and these values are then differenced to give a quarterly series consistent with the annual totals.)

@atoqi(x,y):

similar to @atoq but uses the quarterly indicator series y to pick the points for interpolation. To work correctly, if any value of y is available in a particular year, all values of y must be available in that year.

@qtom(x):

converts the quarterly series x to a monthly series by interpolation.

@stoq(x):

converts the semiannual series x to a quarterly series by interpolation.

The functions @atoqe and @qtome convert periodicities in the same way as the functions of the same names without the final ‘e’. The ‘e’ functions, however, apply to end-of-period series, such as the value of assets.
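For example, hypothetical commands converting an annual flow series and an annual end-of-period stock series to quarterly frequency are:

f gdpq = @atoq(gdpa)
f assetsq = @atoqe(assetsa)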

The conversion functions may be used in G7 but should not be included in models built with Build.

Tools for Industry Data

For use with industry models, or in other situations where variables have a number as a suffix, the @csum function may be useful. Its format is:

@csum(<name>[,<group definition>])

For example:

f outsum = @csum(a.out,1-3 5-10 (7-8) )

In this example, outsum is calculated as the sum of out1, out2, out3, out5, out6, out9, and out10. A pair of numbers separated by a dash specifies a range of sectors to include. Single numbers separated by spaces or commas specify individual sectors to include. Finally, any group specification surrounded by parentheses is excluded from the summed group. If no group definition is provided, G7 calculates the sum over all vector elements, either for the vector in the vam bank specified by a bank letter or, if no bank letter is given, for the vector in the default vam file. The function also can sum individual series in the workspace or another bank, but then the desired sector numbers must be listed.
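For instance, if out is a vector in the default vam file, a hypothetical command that sums all of its elements is:

f outtot = @csum(out)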

Chain Weighting: @pchain() and @qchain()

Since 1996, the NIPA have used Fisher chain-weighting to calculate aggregate variables in constant prices and the corresponding price indices. A Fisher index attempts to avoid the distortions that arise in index numbers formed with inappropriate weights. For period-to-period movement, it is the geometric mean of a Laspeyres index and a Paasche index; for longer periods, the period-to-period indexes are chained together. For example, if we have quantity (Q) and price (P) data on several variables for period 0 and period 1, the Laspeyres quantity index can be written using price weights of period 0:

\[LI = \frac{\sum_{i}{ P_{i,0} \times Q_{i,1}}}{\sum_{i}{ P_{i,0} \times Q_{i,0}}}\]

The Paasche index is written using price weights of period 1:

\[PI = \frac{\sum_{i}{ P_{i,1} \times Q_{i,1}}}{\sum_{i}{ P_{i,1} \times Q_{i,0}}}\]

The Fisher quantity index then is simply:

\[FI = \sqrt{PI \times LI}\]
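These period-to-period links are then chained; as a sketch, the chained quantity index of period t relative to the base period 0 is the cumulative product of the links:

\[FI_{0,t} = \prod_{s=1}^{t}{FI_{s-1,s}} = \prod_{s=1}^{t}{\sqrt{LI_{s-1,s} \times PI_{s-1,s}}}\]

where LI and PI here denote the Laspeyres and Paasche indexes of period s with period s-1 as the base.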

The Fisher price index is calculated similarly, except that the Laspeyres and Paasche components use fixed quantity weights. The convention when creating constant price chain aggregates is to define a base year, in which the price is equal to 1.0, and the quantity is equal to the nominal value. With this convenient definition, the chained quantity multiplied by the chained price yields the nominal value.

The syntax of the chain-weighting function in G7 is:

@xchain(<list of N quantity variables>, <list of N price variables>, base year)

where x may be ‘p’ or ‘q’, and the specification of the quantity and price variables may use group expressions. The function checks only that the total number of variables is even; the user is responsible for ensuring that the proper quantity and price variables are entered. Note also that the original quantity and price variables must be expressed in the same base year as the one specified in the function.

Here is an example that creates aggregates of personal consumption from the NIPA bank:

f qi = @qchain(c030(3-5,7-9), c03(10,11,13,15-19), d04(22-24,26-30,32,34-38), 2005)

Both @pchain() and @qchain() create two variables in the workspace bank: “chwpi” is the chain-weighted price index, and “chwqi” is the chain-weighted quantity index. The choice between the names @pchain and @qchain determines only which of the two series the function expression returns.
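For example, after the @qchain() command above, the matching chain-weighted price index could be copied from the workspace with a hypothetical command such as:

f pdefl = chwpi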

One word of warning is in order: before calculating a chained index, set fdates to an interval for which you have valid price and quantity data, or the function may give unreasonable results.

Alternative Chain Weighting: @pchwt() and @qchwt()

An alternative chain weighting routine also is available in G7. Instead of specifying groups of quantity and price indexes as with the @qchain and @pchain functions, this routine requires groups of nominal levels and groups of quantity indexes. The syntax is given by:

@qchwt(<list of N nominal variables>, <list of N quantity variables>, desired base date [, zero])
@pchwt(<list of N nominal variables>, <list of N quantity variables>, desired base date [, zero])

The @qchwt() function returns an aggregate quantity index, and the @pchwt() function returns an aggregate price index. Note that both this routine and the original chaining routines now support groups of up to 1500 data series. A base date must be provided in order to scale the result, but it need not be consistent with the base date of the source data. Finally, the “zero” option indicates whether missing values should be interpreted as true zeros. At present, it also controls the routine’s ability to skip zero aggregates that may precede or follow the actual data; this problem occurs when the fdates interval is too wide. If these features can be made dependable, then the detection routine need not be optional and may be made standard. The routine adds three series to the workspace: chwqi, chwpi, and chwni, which are the aggregate quantity index, the aggregate price index, and the nominal aggregate, respectively.

An example is:

f pce_real = @qchwt( pcez(1-92), pce(1-92), 2000, z )

The Interpolation Function

If you do not find the function you want in the list above, but can find it tabulated somewhere, you can supply it to G7 through the general @interp() interpolation function. The format is:

@interp(<filename>, <x>)

It applies to x whatever function is specified by interpolation points in the named file. Up to 100 interpolation points may be given. For example, to get the vector of cumulative normal probabilities, y, corresponding to the vector of normal deviates x, do

f  y = @interp(cumnorm,x)

where the file cumnorm contains interpolation points for the cumulative normal curve. The cumnorm file might contain these lines:

-3.3 0.0000
-3.0 0.0013
-2.5 0.0062
-2.0 0.0228
-1.5 0.0668
-1.0 0.1587
-0.5 0.3085
 0.0 0.5000
 0.5 0.6915
 1.0 0.8413
 1.5 0.9332
 2.0 0.9772
 2.5 0.9938
 3.0 0.9987
 3.3 1.0000