In this paper, a new feature generation method for pattern recognition is proposed; it is derived approximately from a geometrical interpretation of Fisher linear discriminant analysis (FLDA). In the fields of pattern recognition and signal processing, principal component analysis (PCA) is widely used for data compression and feature extraction, and iterative learning algorithms for obtaining the eigenvectors in PCA, including neural-network approaches, have been presented and shown to be effective in many applications. More recently, however, FLDA has been adopted in many fields, especially face image analysis. The drawbacks of FLDA are the long computation time caused by the large covariance matrices involved and the fact that the within-class covariance matrix is usually singular. FLDA generally requires minimizing the within-class variance; in this case, however, the inverse of the within-class covariance matrix cannot be obtained, because the data dimension is usually higher than the number of samples, so the matrix has many zero eigenvalues. To overcome this difficulty, a new iterative feature generation method, the simple FLDA, is introduced, and its effectiveness is demonstrated on pattern recognition problems.
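The singularity issue mentioned above can be illustrated with a minimal sketch (not from the paper; the dimensions and data are hypothetical): when the data dimension d exceeds the total number of samples, the within-class scatter matrix has rank at most (number of samples minus number of classes), so it is singular and cannot be inverted.

```python
import numpy as np

# Hypothetical illustration: d-dimensional data with few samples per class.
rng = np.random.default_rng(0)
d, n_per_class, n_classes = 50, 10, 2  # data dimension exceeds sample count

Sw = np.zeros((d, d))
for c in range(n_classes):
    X = rng.normal(size=(n_per_class, d))  # samples of class c (rows)
    Xc = X - X.mean(axis=0)                # center within the class
    Sw += Xc.T @ Xc                        # accumulate within-class scatter

rank = np.linalg.matrix_rank(Sw)
# rank is at most n_classes * (n_per_class - 1) = 18, far below d = 50,
# so Sw has many zero eigenvalues and Sw^{-1} does not exist.
print(rank, rank < d)
```

This is exactly the regime described in the abstract: classical FLDA needs the inverse of the within-class covariance, which fails here, motivating an iterative formulation that avoids the inversion.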