AI Image Validation

X2Earn apps that send captured photos to an AI for image analysis should consider adding specific tasks to their AI prompts to detect:

  • Image quality

  • Doctored or Unrealistic modifications

  • Photo of a computer screen

  • Watermarks

  • Painted or hand-drawn text replacing real data

... or any other unrealistic items that would suggest a "fake" photo.

Example:

In this example we assume the photo is taken from a dApp running inside VeWorld, so it will be captured by the user's mobile camera.

The backend of the app would like the AI to return a JSON structure of the form:

{
  "evaluation_feasible": true,
  "doctored_unrealistic_score": 0.0,
  "doctored_unrealistic_reasons": [],
  "screen_capture_score": 0.0,
  "screen_capture_reasons": [],
  "watermark_score": 0.0,
  "watermark_reasons": [],
  "watermark_text": "",
  "painted_text_score": 0.0,
  "painted_text_reasons": [],
  "final_label": "clean",
  "final_confidence": 0.0
}

Where:

  • evaluation_feasible — true if the image quality is sufficient to perform the other checks; false if the quality is so poor that the other checks would be unreliable.

  • doctored_unrealistic_score / doctored_unrealistic_reasons — a score between 0 and 1 indicating how confident the AI is that the image contains doctored or unrealistic items; the reasons array is populated with a summary.

  • screen_capture_score / screen_capture_reasons — a score between 0 and 1 indicating how confident the AI is that the image is a photo of a computer screen; the reasons array is populated with a summary.

  • watermark_score / watermark_reasons — a score between 0 and 1 indicating how confident the AI is that the image contains watermarks; the reasons array is populated with a summary, and watermark_text holds any watermark text the AI could read.

  • painted_text_score / painted_text_reasons — a score between 0 and 1 indicating how confident the AI is that the image contains user-drawn or painted text; the reasons array is populated with a summary.

  • final_label / final_confidence — a final classification label, one of:

      • clean

      • doctored_unrealistic

      • screen_capture

      • watermarked

      • handdrawn

      • multiple_flags

      • inconclusive

    together with a final score between 0 and 1 giving the level of confidence in the classification.
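As a sketch of how a backend might consume this structure: the helper below parses the AI's JSON verdict and applies a simple accept/reject rule. The threshold value and function names are illustrative assumptions, not part of any official API.

```python
import json

# Hypothetical threshold above which a check is treated as a failure.
SUSPICION_THRESHOLD = 0.5

def review_ai_result(raw: str) -> tuple[bool, list[str]]:
    """Parse the AI's JSON verdict and decide whether the photo passes.

    Returns (accepted, reasons). A photo is rejected when evaluation was
    not feasible or when any individual score crosses the threshold.
    """
    result = json.loads(raw)

    if not result.get("evaluation_feasible", False):
        return False, ["image quality too poor to evaluate"]

    reasons: list[str] = []
    for check in ("doctored_unrealistic", "screen_capture",
                  "watermark", "painted_text"):
        if result.get(f"{check}_score", 0.0) >= SUSPICION_THRESHOLD:
            # Fall back to the check name if the AI gave no reasons.
            reasons.extend(result.get(f"{check}_reasons", []) or [check])

    return (not reasons, reasons)

# Example verdicts: a clean one passes, a flagged one is rejected.
clean = json.dumps({"evaluation_feasible": True,
                    "doctored_unrealistic_score": 0.1,
                    "screen_capture_score": 0.0,
                    "watermark_score": 0.0,
                    "painted_text_score": 0.0})
flagged = json.dumps({"evaluation_feasible": True,
                      "screen_capture_score": 0.9,
                      "screen_capture_reasons": ["moiré pattern visible"]})
```

Keeping the decision logic in the backend, rather than trusting a single pass/fail flag from the model, lets the app tune thresholds per check without changing the prompt.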

We can use a multi-stage prompt to guide the AI through these tasks:
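One way such a multi-stage prompt could be laid out is shown below, with one stage per check followed by a final combining step. All of the prompt wording here is an illustrative assumption; tune it for the model you actually use.

```python
# Illustrative multi-stage prompt: each stage focuses the model on one
# check before asking for the combined JSON verdict. The wording is an
# example only, not a vetted production prompt.
PROMPT_STAGES = [
    # Stage 1: gate on image quality before any other analysis.
    "Assess whether the image is sharp and well-lit enough to analyse. "
    "Set evaluation_feasible to false if not, and skip further scoring.",
    # Stage 2: doctored or unrealistic content.
    "Look for signs of digital manipulation or physically unrealistic "
    "elements; report doctored_unrealistic_score between 0 and 1 with reasons.",
    # Stage 3: screen re-capture.
    "Check for moiré patterns, screen bezels, pixel grids or glare that "
    "suggest a photo of a computer screen; report screen_capture_score.",
    # Stage 4: watermarks.
    "Detect any watermark, transcribe its text into watermark_text, "
    "and report watermark_score.",
    # Stage 5: painted or hand-drawn text.
    "Detect painted or hand-drawn text replacing real data; report "
    "painted_text_score.",
    # Final stage: combine everything into the JSON structure above.
    "Combine the checks into a single JSON object with final_label and "
    "final_confidence, choosing final_label from the allowed values.",
]

def build_prompt() -> str:
    """Join the stages into one numbered instruction block for the model."""
    return "\n\n".join(f"Step {i}: {s}" for i, s in enumerate(PROMPT_STAGES, 1))
```

Numbering the stages encourages the model to work through the checks in order, so the quality gate in stage 1 is applied before any scoring is attempted.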
