The feature that protects children from viewing nudity in iMessage was announced a while ago and launched for US users in December 2021. Now Apple is finally rolling it out globally. Because the feature is AI-assisted, no humans are involved in the screening, yet it is still perceived as quite controversial.
At first, it was believed that the AI would compare shared images against a database of known nudity pictures (a description that better fits Apple's separate, since-delayed CSAM-detection proposal). That approach implied Apple kept such a database at its disposal and updated it regularly, which sounded odd. It now turns out the mechanism is different: the AI decides which images to blur using on-device machine learning. Where does it get its training material? The most obvious source would be reacting to users' reports.
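Apple has not published how this on-device classification works, so the following is only a minimal Swift sketch of what such a check could look like, built on Apple's real Vision and Core ML frameworks. The NudityClassifier model name, the "nudity" label, and the 0.8 threshold are placeholders for illustration, not actual Apple assets.

import Vision
import CoreML
import UIKit

// Hypothetical check: run an incoming image through an on-device Core ML
// image classifier and decide whether it should be blurred.
// "NudityClassifier" is a placeholder model name; any Core ML image
// classifier exposing a "nudity" label would slot in the same way.
func shouldBlur(_ image: UIImage, threshold: Float = 0.8,
                completion: @escaping (Bool) -> Void) {
    guard let cgImage = image.cgImage,
          let model = try? VNCoreMLModel(for: NudityClassifier().model) else {
        completion(false)
        return
    }
    let request = VNCoreMLRequest(model: model) { request, _ in
        // Take the top classification and compare it against the threshold.
        let top = (request.results as? [VNClassificationObservation])?.first
        let flagged = top?.identifier == "nudity"
            && (top?.confidence ?? 0) >= threshold
        completion(flagged)
    }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}

The key property this sketch shares with Apple's description is that everything happens on the device itself: the image never has to leave iMessage for a server to analyze it.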
The feature has to be activated manually. First, make sure the device runs at least iOS/iPadOS 15.2 or macOS Monterey 12.1. You should also be signed in with your Apple ID, and your child's device must be part of your Family Sharing group. Then go to Settings > Screen Time > Communication Safety and turn on the "Check for Sensitive Photos" option. If your child's device is protected with a Screen Time passcode, you may be asked to enter it.
If your child receives a picture that may contain nudity, it is blurred, and they can ask you to review it. If the child is under 13, that offer appears every time a flagged image arrives (though accepting it isn't mandatory). It is then up to you to view the picture and decide whether it is genuinely harmful or was flagged by mistake. Your feedback may be used to further improve the algorithms.
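Purely as an illustration of the flow just described (blur first, offer parental notification to under-13s, record the parent's verdict), here is a hedged Swift sketch. Apple exposes no such API, so every type and function below is a placeholder.

// Illustrative only: Apple has not documented this logic or any feedback
// channel. The stubs at the bottom stand in for system behavior.
struct FlaggedImage {
    let id: String
    let modelConfidence: Float
}

func handleFlagged(_ image: FlaggedImage, childAge: Int) {
    blur(image)
    if childAge < 13 {
        // Under-13s are offered (but not required) the option to notify
        // a parent each time a flagged image arrives.
        offerToNotifyParent(about: image)
    }
}

// A parent's verdict after reviewing a flagged image. An image that was
// flagged but judged harmless is a labeled false positive, exactly the
// kind of example that could, in principle, improve the classifier.
func recordParentVerdict(for image: FlaggedImage, isHarmful: Bool) {
    if !isHarmful {
        logFalsePositive(image)
    }
}

// Placeholder stubs so the sketch compiles.
func blur(_ image: FlaggedImage) {}
func offerToNotifyParent(about image: FlaggedImage) {}
func logFalsePositive(_ image: FlaggedImage) {}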
The global launch won't happen in all countries simultaneously. The first to receive the new feature will be the UK, followed by Canada, Australia, and New Zealand. What do you think of this protective measure? Share your views with us in the comments!