Apple has made significant improvements to Visual Intelligence in iOS 26. The Apple Intelligence feature, introduced last year, now works not only with the camera but also with on-screen content, bringing it up to par with similar features on Android phones. Here is everything that's new…
Screen content recognition
In iOS 18, Visual Intelligence only worked with the camera. In iOS 26, it can also be used with the content on your device's screen: take a screenshot, and Visual Intelligence can recognize what you're looking at, run image searches, and pull up more information via ChatGPT.
How to use Visual Intelligence for screen content recognition
Visual Intelligence for screenshots works the same way as Visual Intelligence in the camera view, but it lives in the screenshot interface. Take a screenshot (press the volume up button and the side button at the same time), then exit the Markup view, which opens by default, by tapping the small pencil icon at the top of the screen. The Visual Intelligence options appear from there.
Point to Search
With Visual Intelligence's Point to Search feature, you can search for an object in a screenshot by drawing over it with your finger. It's similar to Android's Circle to Search feature.

Point to Search lets you visually search for a specific object in a screenshot, even if there's more than one thing in the image. Google image search is used by default, but at the launch event Apple showed the feature working with other apps, such as Etsy; apps need to add support for it.
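For developers, Apple's WWDC25 materials describe this integration as an App Intents value query: the system hands the app a descriptor of the region the user pointed at, and the app returns matching entities to display. Below is a minimal sketch of that shape; ProductEntity and the empty search result are hypothetical stand-ins, and the VisualIntelligence types shown (IntentValueQuery, SemanticContentDescriptor) are taken from Apple's announced developer API, which may still change before release.

```swift
import AppIntents
import VisualIntelligence

// Hypothetical entity representing a product in the app's own catalog.
struct ProductEntity: AppEntity {
    static var typeDisplayRepresentation: TypeDisplayRepresentation = "Product"
    static var defaultQuery = ProductQuery()

    var id: String
    var name: String

    var displayRepresentation: DisplayRepresentation {
        DisplayRepresentation(title: "\(name)")
    }
}

// Standard App Intents query so the system can resolve entities by ID.
struct ProductQuery: EntityQuery {
    func entities(for identifiers: [ProductEntity.ID]) async throws -> [ProductEntity] {
        // Hypothetical lookup in the app's own database.
        []
    }
}

// The value query visual intelligence calls with the user's selection;
// the descriptor carries the pixels of the region that was circled.
struct ProductIntentValueQuery: IntentValueQuery {
    func values(for input: SemanticContentDescriptor) async throws -> [ProductEntity] {
        guard let pixelBuffer = input.pixelBuffer else { return [] }
        // A real app would run its own visual search here, matching the
        // pixel buffer against its catalog; this sketch returns nothing.
        _ = pixelBuffer
        return []
    }
}
```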
In some cases, Visual Intelligence recognizes individual objects in an image on its own, and you can tap them without using Point to Search at all. This works much like object recognition in the Photos app, except that tapping here launches a visual search.
Ask and Search
If you don't need to isolate a single object in your screenshot, just tap the Ask button to ask a question about what you see. Your question is passed to ChatGPT, which provides the answer. The Search button instead queries Google Search for more information.

As with camera-based Visual Intelligence, if your screenshot includes the date, time, and related details for an event, you can add it directly to your calendar.
New object recognition
Apple didn't announce it, but Visual Intelligence also adds support for quickly recognizing new object types. In addition to the animals and plants it could already identify, it can now recognize artwork, books, landmarks, natural monuments, and statues.
When you point Visual Intelligence at an object it can recognize, a small glowing icon appears; tap it to see information about the object. The nice thing about this aspect of Visual Intelligence is that it works both in a live camera view and on a captured photo.

Standard Ask and Search requests using Visual Intelligence require taking a photo so it can be forwarded to services like ChatGPT or Google image search. Artwork, books, landmarks, natural monuments, statues, plants, and animals, on the other hand, are recognized on the device without contacting another service.
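As an aside for developers, this kind of on-device recognition is also exposed through VisionKit's Visual Look Up support, a related but separate API from the Visual Intelligence feature itself. A minimal sketch of attaching it to an image view, assuming a UIKit app:

```swift
import UIKit
import VisionKit

// Minimal sketch: analyze a UIImage on device with VisionKit and attach
// the result to the image view so recognized subjects (plants, animals,
// landmarks, and so on) become tappable, similar to Visual Look Up.
@MainActor
func enableLookUp(on imageView: UIImageView) async throws {
    guard let image = imageView.image else { return }

    let analyzer = ImageAnalyzer()
    let configuration = ImageAnalyzer.Configuration([.visualLookUp, .text])
    let analysis = try await analyzer.analyze(image, configuration: configuration)

    let interaction = ImageAnalysisInteraction()
    imageView.addInteraction(interaction)
    interaction.analysis = analysis
    interaction.preferredInteractionTypes = .automatic
}
```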
Compatibility
Visual Intelligence is limited to devices that support Apple Intelligence, which means the iPhone 15 Pro models and the iPhone 16 models. On devices with a Camera Control button, it is activated by long-pressing that button; on other devices, you can use the Action button or a Control Center toggle.
Release date
iOS 26 is currently in beta testing and will be released to the public in September.