Machine unlearning has emerged as a critical research direction in response to growing concerns about data privacy, regulatory compliance, and the ethical deployment of artificial intelligence systems. With regulations such as the General Data Protection Regulation (GDPR) enforcing the "right to be forgotten," traditional machine learning paradigms—where models permanently retain learned information—are increasingly inadequate. This study presents a comparative analysis of emerging model technologies and existing survey studies on machine unlearning, examining how contemporary architectures address data removal, privacy guarantees, and computational efficiency. The analysis categorizes unlearning techniques into exact, approximate, federated, and verification-based approaches, and evaluates their applicability across traditional machine learning models, deep neural networks, transformer architectures, and generative models. The paper further reviews major survey contributions to identify common taxonomies, evaluation metrics, and open research challenges. Special emphasis is placed on emerging generative systems and large-scale foundation models, where latent memorization and parameter complexity complicate effective unlearning. Comparative findings highlight trade-offs between computational cost, scalability, and privacy robustness, revealing that while exact unlearning ensures strong theoretical guarantees, approximate and optimization-based methods offer practical scalability for modern deep models. Additionally, the study identifies gaps in standardized verification protocols and benchmarking practices across surveys. By synthesizing current advancements and limitations, this work provides a structured foundation for future research aimed at developing efficient, verifiable, and scalable machine unlearning mechanisms for next-generation AI systems.