HateAndUnfairnessEvaluator Constructor
Definition
Important
Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
An IEvaluator that uses the Azure AI Foundry Evaluation service to evaluate responses produced by an AI model for the presence of hateful or unfair content.
C++
public:
HateAndUnfairnessEvaluator();

C#
public HateAndUnfairnessEvaluator();

VB
Public Sub New ()
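For example, the following sketch constructs the evaluator and builds a ChatConfiguration that routes evaluation through the Azure AI Foundry Evaluation service. It assumes the ContentSafetyServiceConfiguration type and its ToChatConfiguration extension method from the same Microsoft.Extensions.AI.Evaluation.Safety package; the subscription, resource group, and project names are placeholders, and because this is prerelease product the exact API shapes may change.

using Azure.Identity;
using Microsoft.Extensions.AI.Evaluation;
using Microsoft.Extensions.AI.Evaluation.Safety;

// The constructor takes no arguments; the connection to the Azure AI
// Foundry Evaluation service is supplied separately via a
// ChatConfiguration when EvaluateAsync is called.
IEvaluator evaluator = new HateAndUnfairnessEvaluator();

// Placeholder Azure AI Foundry project details (assumed parameter names).
var serviceConfiguration = new ContentSafetyServiceConfiguration(
    credential: new DefaultAzureCredential(),
    subscriptionId: "<subscription-id>",
    resourceGroupName: "<resource-group-name>",
    projectName: "<project-name>");

ChatConfiguration chatConfiguration = serviceConfiguration.ToChatConfiguration();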
Remarks
HateAndUnfairnessEvaluator returns a NumericMetric with a value between 0 and 7, where 0 indicates an excellent score and 7 indicates a poor score.
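To make the scoring concrete, here is a minimal sketch of invoking the evaluator and reading the returned NumericMetric. It reuses the evaluator and chatConfiguration from the earlier sketch, calls the EvaluateAsync method defined by IEvaluator, and assumes the metric is retrieved via the HateAndUnfairnessEvaluator.HateAndUnfairnessMetricName constant; the conversation content is purely illustrative.

using System;
using Microsoft.Extensions.AI;
using Microsoft.Extensions.AI.Evaluation;

var messages = new[]
{
    new ChatMessage(ChatRole.User, "How do employees from that region perform?"),
};
var modelResponse = new ChatResponse(
    new ChatMessage(ChatRole.Assistant, "Performance varies by individual, not by background."));

EvaluationResult result = await evaluator.EvaluateAsync(
    messages, modelResponse, chatConfiguration);

// Lower is better: 0 is an excellent score and 7 is a poor one.
NumericMetric metric =
    result.Get<NumericMetric>(HateAndUnfairnessEvaluator.HateAndUnfairnessMetricName);
Console.WriteLine($"Hate and unfairness score: {metric.Value}");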
Note that HateAndUnfairnessEvaluator can detect harmful content in both image-based and text-based responses. Supported image file formats include JPG/JPEG, PNG, and GIF. Other modalities such as audio and video are not currently supported.
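Since image content in supported formats is evaluated the same way as text, a response that mixes the two needs no special handling. The sketch below assumes the TextContent and UriContent types from Microsoft.Extensions.AI and reuses the evaluator, messages, and chatConfiguration from the earlier sketches; the PNG URI is a placeholder.

using Microsoft.Extensions.AI;
using Microsoft.Extensions.AI.Evaluation;

// A response mixing text with a PNG image (one of the supported formats).
var multimodalResponse = new ChatResponse(
    new ChatMessage(ChatRole.Assistant,
    [
        new TextContent("Here is the generated image."),
        new UriContent("https://example.com/generated.png", mediaType: "image/png"),
    ]));

// Evaluated exactly like a text-only response.
EvaluationResult imageResult = await evaluator.EvaluateAsync(
    messages, multimodalResponse, chatConfiguration);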