Another tool that Rob Petrosino reviewed was Meta AI's new Segment Anything Model, which is capable of performing tasks traditionally handled by machine learning engineers. Its segmentation tool can take prompts from other systems and identify objects in images based on eye-tracking input, bounding-box prompts, and text prompts. However, the system does not label objects in new datasets.
The model's zero-shot generalization allows it to segment unfamiliar objects, and its segmentation tools can quickly and efficiently separate individual dogs, sticks, or backgrounds within an image.
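To make the idea of prompt-driven segmentation concrete, here is a minimal toy sketch, not the actual model: a bounding-box prompt restricts where a (here, trivially threshold-based) mask may appear. The function name `segment_with_box` and the synthetic image are illustrative assumptions, not part of the Segment Anything API.

```python
import numpy as np

def segment_with_box(image: np.ndarray, box: tuple, threshold: float = 0.5) -> np.ndarray:
    """Toy stand-in for promptable segmentation: the box prompt limits
    where foreground pixels can be marked in the returned boolean mask."""
    x0, y0, x1, y1 = box
    mask = np.zeros(image.shape[:2], dtype=bool)
    region = image[y0:y1, x0:x1]
    mask[y0:y1, x0:x1] = region > threshold  # trivial "segmentation"
    return mask

# A synthetic 6x6 "image" with a bright 2x2 object in the lower right.
img = np.zeros((6, 6))
img[3:5, 3:5] = 1.0

mask = segment_with_box(img, box=(2, 2, 6, 6))
print(mask.sum())  # 4 object pixels fall inside the box prompt
```

A real system replaces the threshold with a learned mask decoder, but the interaction pattern is the same: the prompt narrows the search, the model produces the mask.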
The video version of the model runs on a bi-directional data stream, allowing interactive tracking and repeated analysis. The model is efficient enough that the heavy computation can run in the cloud on key frames of video, and the code is open source on GitHub. Details such as the prompts the model supports, the data it was trained on, the time required to train it, and its parameter count can all be found on the Segment Anything website.
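The efficiency claim above rests on a split the Segment Anything design uses: an expensive image embedding is computed once (e.g. in the cloud, per key frame), and each interactive prompt is answered by a much cheaper decoder. The sketch below is a toy illustration of that amortization, assuming made-up stand-ins `encode_frame` and `decode_prompt` rather than the model's real components.

```python
import numpy as np

def encode_frame(frame: np.ndarray) -> np.ndarray:
    """Stand-in for the heavy image encoder: run once per key frame,
    then reused for every subsequent prompt on that frame."""
    return frame.astype(np.float64) / 255.0  # pretend "embedding"

def decode_prompt(embedding: np.ndarray, point: tuple) -> bool:
    """Stand-in for the lightweight prompt decoder: cheap enough to
    run interactively for each click."""
    y, x = point
    return bool(embedding[y, x] > 0.5)

frame = np.zeros((4, 4), dtype=np.uint8)
frame[1:3, 1:3] = 255           # a bright object in the frame

emb = encode_frame(frame)        # expensive step, done once
clicks = [(1, 1), (0, 0), (2, 2)]
results = [decode_prompt(emb, c) for c in clicks]
print(results)  # [True, False, True]
```

Three clicks reuse one embedding, which is why repeated interactive analysis of the same key frame stays fast.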
Overall, the model's promptable segmentation and zero-shot generalization open up new possibilities for AI-powered image segmentation and object recognition, particularly in industries such as healthcare and manufacturing.