In the era of big data, predictive analytics has become central to data-driven decision-making across sectors such as finance, healthcare, and marketing. The effectiveness of predictive models, however, depends heavily on the quality of the features used in training. This study evaluates and compares feature engineering techniques for improving the accuracy of predictive models based on the Random Forest (RF) and Extreme Gradient Boosting (XGBoost) algorithms. Using a quantitative experimental approach, we apply several techniques, including SHAP-based feature importance, Principal Component Analysis (PCA), and categorical variable encoding. The evaluation shows that SHAP-based feature importance yields the best results, with a Mean Squared Error (MSE) of 0.150 and a Root Mean Squared Error (RMSE) of 0.387 for the XGBoost model, outperforming the baseline without feature engineering (MSE of 0.230, RMSE of 0.479). The combination of PCA and encoding also improves performance substantially, with an MSE of 0.160 and an RMSE of 0.400. Across all testing scenarios, XGBoost consistently outperforms RF. The study's contribution is a set of recommendations for selecting feature engineering techniques that improve the predictive quality of machine learning (ML) models. The findings offer guidance to researchers and practitioners developing feature engineering strategies and open avenues for exploring advanced techniques in more complex data domains.
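As a rough illustration of the best-performing pipeline described above (SHAP-based feature importance feeding an XGBoost regressor, scored by MSE and RMSE), the following Python sketch shows one way such a step could be implemented. The synthetic dataset, the top-10 feature cutoff, and all hyperparameters are assumptions for illustration, not the study's actual protocol.

```python
# Minimal sketch of SHAP-based feature selection for XGBoost regression.
# The dataset, top-k cutoff, and hyperparameters are illustrative assumptions.
import numpy as np
import shap
import xgboost as xgb
from sklearn.datasets import make_regression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the study's dataset.
X, y = make_regression(n_samples=1000, n_features=20, noise=0.1, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Fit a baseline XGBoost regressor on all features.
model = xgb.XGBRegressor(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Rank features by mean absolute SHAP value and keep the top k (k=10 assumed).
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_train)
importance = np.abs(shap_values).mean(axis=0)
top_k = np.argsort(importance)[::-1][:10]

# Retrain on the selected features and evaluate with MSE/RMSE, as in the paper.
selected = xgb.XGBRegressor(n_estimators=200, random_state=42)
selected.fit(X_train[:, top_k], y_train)
pred = selected.predict(X_test[:, top_k])
mse = mean_squared_error(y_test, pred)
print(f"MSE:  {mse:.3f}")
print(f"RMSE: {np.sqrt(mse):.3f}")
```

Note that RMSE is simply the square root of MSE, which is consistent with the reported figures (e.g., sqrt(0.150) ≈ 0.387 and sqrt(0.160) = 0.400).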