This thesis explores Bayesian approaches to managing uncertainty in applied machine learning, focusing on methods such as Bayesian neural networks, Gaussian processes, and variational inference. By incorporating uncertainty into predictions, these models aim to improve robustness and interpretability, particularly in noisy or data-limited settings. The work will compare Bayesian models with traditional deterministic approaches, evaluate their effectiveness in real-world applications, and examine advanced sampling techniques such as Hamiltonian Monte Carlo for more accurate posterior estimation.
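To make the core idea concrete, the following minimal sketch shows what "incorporating uncertainty into predictions" means for one of the methods named above, a Gaussian process. Unlike a deterministic model, the GP posterior returns both a predictive mean and a predictive variance, and the variance grows away from the training data. The kernel, data, and hyperparameters here are hypothetical choices for illustration, not from the thesis itself.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    sq_dists = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * sq_dists / length_scale**2)

def gp_posterior(x_train, y_train, x_test, noise=0.1):
    """Exact GP posterior mean and pointwise variance at test inputs."""
    K = rbf_kernel(x_train, x_train) + noise**2 * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test)
    K_ss = rbf_kernel(x_test, x_test)
    K_inv = np.linalg.inv(K)
    mean = K_s.T @ K_inv @ y_train
    cov = K_ss - K_s.T @ K_inv @ K_s
    return mean, np.diag(cov)

# Toy data: one test point near the observations, one far away.
x_train = np.array([-2.0, -1.0, 0.0, 1.0])
y_train = np.sin(x_train)
x_test = np.array([0.5, 5.0])
mean, var = gp_posterior(x_train, y_train, x_test)
# Predictive variance is larger far from the training data,
# signalling that the model "knows what it does not know".
```

This calibrated growth of uncertainty in sparse regions is precisely the property the thesis evaluates against deterministic baselines.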