Article Details
Retrieved on: 2024-07-20 10:47:09
Summary
The article examines how 'Explainable AI' (XAI) techniques improve the interpretability of AI models used for land use and land cover (LULC) classification. It compares the performance of transformer-based models on datasets such as EuroSAT and integrates interpretability tools like Captum to pinpoint the image regions that most influence each prediction, supporting both accuracy and transparency and demonstrating the practical value of XAI.

Tags: Artificial neural networks; Machine learning; Image processing; Vision transformer; Deep learning; Large language models; Fine-tuning; Neural network; Training, validation, and test data sets
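To make the attribution idea concrete: tools like Captum pinpoint influential image regions with methods such as Integrated Gradients, which averages the model's gradients along a path from a baseline image to the actual input. Below is a minimal NumPy sketch of that core computation, under stated assumptions: a toy linear "model" over four pixel values stands in for the article's transformer classifier, and all names (`integrated_gradients`, `f`, `grad_f`) are hypothetical, not taken from Captum or the article.

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=50):
    # Riemann-sum approximation of the path integral of gradients
    # taken along the straight line from the baseline to the input.
    alphas = np.linspace(0.0, 1.0, steps)
    grads = np.array([grad_f(baseline + a * (x - baseline)) for a in alphas])
    avg_grad = grads.mean(axis=0)
    # Each pixel's attribution: (input - baseline) times its average gradient.
    return (x - baseline) * avg_grad

# Toy stand-in "model": the logit of one class in a linear classifier
# over 4 pixel values (the article's models are transformers).
w = np.array([0.5, -1.0, 2.0, 0.0])
f = lambda x: float(w @ x)
grad_f = lambda x: w  # gradient of a linear model is constant

x = np.array([1.0, 2.0, 3.0, 4.0])   # fake "image"
baseline = np.zeros(4)               # all-black baseline

attr = integrated_gradients(grad_f, x, baseline)
print(attr)                          # per-pixel attributions: [0.5 -2. 6. 0.]
# Completeness check: attributions sum to f(x) - f(baseline) = 4.5.
print(round(attr.sum(), 6), round(f(x) - f(baseline), 6))
```

Large positive or negative attributions flag the pixels that drove the prediction, which is how such maps highlight influential regions in LULC imagery; Captum applies the same principle to real networks via automatic differentiation.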
Article found on: www.nature.com