

Given data points for a dependent variable «y» indexed by «I», and data for a "basis" (independent variables) «b» indexed by «I» and a basis index «K», Regression returns coefficients C for a linear model:

$ y = c_0 b_0(\bar x) + c_1 b_1(\bar x) + \ldots + c_m b_m(\bar x) $

Where for any single data point, $ \bar x $ is a vector of values, known as the independent values, $ b_k(\bar x) $ are arbitrary functions of $ \bar x $ known as the basis functions, and $ y $ is the dependent value. See Regression analysis in the Analytica User Guide for basic usage.

«y» Values of the dependent variable, indexed by «I».
«b» Values of the basis (independent variables), indexed by «I» and «K».
«I» Each element of «I» corresponds to a different data point.
«K» (Optional) Basis index, or list of independent variables and, usually, a constant. This can be omitted when «b» is a scalar or has an implicit index (like a list) for the different basis terms.

Regression uses least-squares estimation, meaning that it minimizes the sum of squares of the residuals (the estimation error) -- the difference between actual and estimated values:

$ \sum_i (y_i - \hat y_i)^2 $

Where Y_est contains the estimated values of Y:

Variable C := Regression(Y, B, I, K)
Variable Y_est := Sum(C*B, K)

Sometimes the basis functions are trivial functions, such as $ b_0(x) = 1 $ and $ b_1(x) = x $, which leads to the one-variable linear model:

$ y = c_0 + c_1 x $

When fitting an nth-degree polynomial of one variable, x, the basis functions are $ b_k(x) = x^k $ for $ k = 0..n $. Or when you have several variables, $ x_1, x_2, \ldots, x_n $, you can fit a hyperplane using $ b_0(\bar x) = 1 $ and $ b_k(\bar x) = x_k $ for $ k = 1..n $.

Most regressions include a constant term in the basis, e.g., $ b_0(\bar x) = 1 $, although this is not absolutely required. The coefficient associated with the constant term is called the bias, offset or y-intercept, and the constant basis term is called the bias term.

See also: Plotting Regression lines Compared to Data.
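Analytica itself cannot be run outside its own environment, so as an illustration only, here is a NumPy sketch of the same least-squares computation that `Regression(Y, B, I, K)` performs; the data values and variable names (`x`, `y`, `B`, `c`, `y_est`) are made up for this example:

```python
import numpy as np

# Data points (the role of index I): y generated exactly from y = 2 + 3x,
# so the fitted coefficients should recover [2, 3].
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 + 3.0 * x

# Basis matrix B, rows indexed by I and columns by K:
# b_0(x) = 1 (the bias term) and b_1(x) = x.
B = np.column_stack([np.ones_like(x), x])

# Least-squares solve for the coefficients; c plays the role of
# C := Regression(Y, B, I, K).
c, *_ = np.linalg.lstsq(B, y, rcond=None)

# Estimated values, the analogue of Y_est := Sum(C*B, K).
y_est = B @ c

print(np.round(c, 6))  # coefficients close to [2, 3]
```

Because the data here lie exactly on a line, the residual sum of squares is (numerically) zero and `y_est` reproduces `y`; with noisy data, `c` would instead be the coefficients minimizing the sum of squared residuals.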

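The polynomial basis $ b_k(x) = x^k $ can be illustrated the same way. This is a hedged NumPy sketch, not Analytica; the degree, data, and names (`n`, `coef`) are invented for the example:

```python
import numpy as np

# Fit a quadratic (n = 2) by least squares with basis b_k(x) = x^k,
# k = 0..n. Data generated exactly from y = 1 - 2x + 0.5x^2.
n = 2
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = 1.0 - 2.0 * x + 0.5 * x**2

# Basis matrix with B[i, k] = x_i ** k: columns are 1, x, x^2,
# where the first column is the constant (bias) term.
B = np.vander(x, n + 1, increasing=True)

coef, *_ = np.linalg.lstsq(B, y, rcond=None)
print(np.round(coef, 6))  # recovers approximately [1, -2, 0.5]
```

Swapping in columns $ 1, x_1, x_2, \ldots $ for several independent variables, instead of powers of one variable, gives the hyperplane fit described above with no other changes to the solve.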