MMDecodingTrust aims to provide a thorough assessment of trustworthiness in multimodal foundation models (MMFMs).
This research endeavor is designed to help researchers and practitioners better understand the capabilities, limitations, and potential risks involved in deploying these state-of-the-art MMFMs.
This project is organized around six primary perspectives of trustworthiness: safety, hallucination, fairness, privacy, adversarial robustness, and out-of-distribution robustness, each evaluated on recent MMFMs under a variety of settings.

⚠️ WARNING: Our data contains model outputs that may be considered offensive.